Search results for: feature detection
690 Remote Vital Signs Monitoring in Neonatal Intensive Care Unit Using a Digital Camera
Authors: Fatema-Tuz-Zohra Khanam, Ali Al-Naji, Asanka G. Perera, Kim Gibson, Javaan Chahl
Abstract:
Conventional contact-based vital signs monitoring sensors such as pulse oximeters or electrocardiogram (ECG) electrodes may cause discomfort, skin damage, and infections, particularly in neonates with fragile, sensitive skin. Remote monitoring of vital signs is therefore desirable in both clinical and non-clinical settings. Camera-based vital signs monitoring is a recent technology for these applications with many positive attributes. However, camera-based studies on neonates in a clinical setting are still limited. In this study, the heart rate (HR) and respiratory rate (RR) of eight infants at the Neonatal Intensive Care Unit (NICU) of Flinders Medical Centre were remotely monitored using a digital camera with color- and motion-based computational methods. The region of interest (ROI) was selected efficiently by incorporating an image decomposition method. Spatial averaging, spectral analysis, band-pass filtering, and peak detection were then used to extract both HR and RR. The experimental results were validated against ground truth data from an ECG monitor and showed strong correlation, with Pearson correlation coefficients (PCC) of 0.9794 and 0.9412 for HR and RR, respectively. The RMSE between camera-based and ECG data was 2.84 beats/min for HR and 2.91 breaths/min for RR. A Bland-Altman analysis also showed close agreement between the two data sets, with mean biases of 0.60 beats/min and 1 breath/min, and limits of agreement of -4.9 to +6.1 beats/min and -4.4 to +6.4 breaths/min for HR and RR, respectively. Therefore, video camera imaging may replace conventional contact-based monitoring in the NICU and has potential applications in other contexts such as home health monitoring.
Keywords: neonates, NICU, digital camera, heart rate, respiratory rate, image decomposition
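The extraction chain the abstract describes (spatial averaging over the ROI, spectral analysis with a band-pass restriction, peak detection) can be sketched in a few lines of Python with NumPy. The synthetic 2 Hz pulse, the 10x10-pixel ROI, the frame rate and the noise level below are illustrative assumptions, not data from the study:

```python
import numpy as np

def estimate_heart_rate(frames, fps, lo_hz=1.0, hi_hz=4.0):
    # Spatial averaging: collapse each ROI frame to one mean intensity value.
    signal = frames.reshape(len(frames), -1).mean(axis=1)
    signal = signal - signal.mean()
    # Spectral analysis restricted to a plausible heart-rate band (60-240 bpm).
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(signal))
    band = (freqs >= lo_hz) & (freqs <= hi_hz)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_hz  # beats per minute

# Synthetic demo: a 2 Hz (120 beats/min) pulse in a 10x10-pixel ROI at 30 fps.
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
pulse = 0.5 * np.sin(2 * np.pi * 2.0 * t)
rng = np.random.default_rng(0)
frames = 100.0 + pulse[:, None, None] + rng.normal(0.0, 0.2, (t.size, 10, 10))
hr = estimate_heart_rate(frames, fps)
```

RR extraction works the same way with a lower band (roughly 0.3-1.5 Hz) applied to a motion-derived signal instead of the color channel.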
Procedia PDF Downloads 104
689 A Robust Visual Simultaneous Localization and Mapping for Indoor Dynamic Environment
Authors: Xiang Zhang, Daohong Yang, Ziyuan Wu, Lei Li, Wanting Zhou
Abstract:
Visual Simultaneous Localization and Mapping (VSLAM) uses cameras to collect information in unknown environments in order to perform localization and map construction simultaneously, and has a wide range of applications in autonomous driving, virtual reality and other related fields. Current VSLAM research can maintain high accuracy in static environments. In dynamic environments, however, moving objects reduce the stability of the VSLAM system, resulting in inaccurate localization and mapping, or even outright failure. In this paper, a robust VSLAM method is proposed to deal effectively with dynamic environments. We propose a dynamic region removal scheme based on semantic segmentation neural networks and geometric constraints. First, a semantic segmentation network is used to extract the prior active motion region, prior static region and prior passive motion region in the environment. Then, a lightweight frame tracking module initializes the pose transformation between the previous and current frames on the prior static region. A motion consistency detection module based on multi-view geometry and scene flow divides the environment into static and dynamic regions, successfully eliminating the dynamic object regions. Finally, only the static region is used by the tracking thread. Our research builds on ORB-SLAM3, one of the most effective VSLAM systems available. We evaluated our method on the TUM RGB-D benchmark, and the results demonstrate that the proposed method improves the accuracy of the original ORB-SLAM3 by 70% to 98.5% in highly dynamic environments.
Keywords: dynamic scene, dynamic visual SLAM, semantic segmentation, scene flow, VSLAM
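The core motion-consistency idea (flag features whose observed motion disagrees with the camera-motion prediction estimated on the static region) can be sketched as below. The planar homography model and the fixed pixel threshold are simplifying assumptions for illustration, not the paper's exact multi-view geometry and scene-flow test:

```python
import numpy as np

def flag_dynamic(pts_prev, pts_curr, H, thresh=2.0):
    # Predict where each feature would land if it were static, using the
    # transform H estimated from the prior static region, then flag points
    # whose observed position deviates by more than `thresh` pixels.
    pts_h = np.hstack([pts_prev, np.ones((len(pts_prev), 1))])
    proj = pts_h @ H.T
    proj = proj[:, :2] / proj[:, 2:3]   # back from homogeneous coordinates
    residual = np.linalg.norm(proj - pts_curr, axis=1)
    return residual > thresh

# Camera motion approximated here as a 5-pixel horizontal shift.
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
pts_prev = np.array([[100.0, 100.0], [200.0, 150.0], [300.0, 200.0]])
# The first two points follow the camera motion; the third moves independently.
pts_curr = np.array([[105.0, 100.0], [205.0, 150.0], [325.0, 210.0]])
dynamic = flag_dynamic(pts_prev, pts_curr, H)
```

Points flagged dynamic would be excluded from the tracking thread, mirroring the paper's static-region-only tracking.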
Procedia PDF Downloads 116
688 Ion Beam Writing and Implantation in Graphene Oxide, Reduced Graphene Oxide and Polyimide Through Polymer Mask for Sensorics Applications
Authors: Jan Luxa, Vlastimil Mazanek, Petr Malinsky, Alexander Romanenko, Mariapompea Cutroneo, Vladimir Havranek, Josef Novak, Eva Stepanovska, Anna Mackova, Zdenek Sofer
Abstract:
Using accelerated energetic ions is an interesting method for introducing structural changes in various carbon-based materials. The properties can be altered in two ways: a) the ions form conductive pathways in graphene oxide structures by eliminating oxygen functionalities, and b) doping with selected ions forms metal nanoclusters, thus increasing the conductivity. In this work, energetic ion beams were employed in two ways to prepare capacitor structures in graphene oxide (GO), reduced graphene oxide (rGO) and polyimide (PI) on the micro-scale. The first method used ion beam writing with a focused ion beam; the second involved ion implantation through a polymeric mask. To prepare the polymeric mask, PMMA was spin-coated directly on top of the foils, followed by proton beam writing and development in isopropyl alcohol; the mask was finally removed with acetone. All three materials were exposed to ion beams with an energy of 2.5-5 MeV and an ion fluence of 3.75×10¹⁴ cm⁻² (1800 nC·mm⁻²). The prepared microstructures were thoroughly characterized by various analytical methods, including scanning electron microscopy (SEM) with energy-dispersive X-ray spectroscopy (EDS), X-ray photoelectron spectroscopy (XPS), micro-Raman spectroscopy, Rutherford backscattering spectroscopy (RBS) and elastic recoil detection analysis (ERDA). Finally, these materials were tested as humidity sensors using electrical conductivity measurements. The results clearly demonstrate that the type of ions, their energy and their fluence all have a significant influence on the sensory properties of the prepared sensors.
Keywords: graphene, graphene oxide, polyimide, ion implantation, sensors
Procedia PDF Downloads 85
687 Improvements in Transient Testing in The Transient REActor Test (TREAT) with a Choice of Filter
Authors: Harish Aryal
Abstract:
The safe and reliable operation of nuclear reactors has always been one of the topmost priorities in the nuclear industry. Transient testing allows us to understand the time-dependent behavior of the neutron population in response to either a planned change in reactor conditions or unplanned circumstances. These unforeseen conditions might occur due to sudden reactivity insertions, feedback, power excursions, instabilities, and accidents. To study such behavior we need transient testing, which is akin to car crash testing for estimating the durability and strength of a car design. In nuclear design, transient testing can simulate a wide range of accidents due to sudden reactivity insertions and helps to establish the feasibility and integrity of the fuel to be used in certain reactor types. This testing involves a high neutron flux environment and real-time imaging technology with instrumentation of appropriate accuracy and resolution to study fuel slumping behavior. With the aid of transient testing and adequate imaging tools, it is possible to test the safety basis for reactor and fuel designs, serving as a gateway to licensing advanced reactors in the future. To that end, it is crucial to fully understand advanced imaging techniques, both analytically and via simulations. This paper presents an innovative method of supporting real-time imaging of fuel pins and other structures during transient testing. The major fuel-motion detection device studied in this work is the hodoscope, which requires collimators. This paper provides 1) an MCNP model and simulation of a Transient Reactor Test (TREAT) core with a central fuel element replaced by a slotted fuel element, providing an open path between test samples and a hodoscope detector, and 2) a choice of filter to improve image resolution.
Keywords: hodoscope, transient testing, collimators, MCNP, TREAT, hodogram, filters
Procedia PDF Downloads 77
686 Innovative Screening Tool Based on Physical Properties of Blood
Authors: Basant Singh Sikarwar, Mukesh Roy, Ayush Goyal, Priya Ranjan
Abstract:
This work combines two bodies of knowledge: the biomedical basis of blood stain formation, and the fluid mechanics insight that such stain formation depends heavily on physical properties. Biomedical research further shows that different patterns in blood stains are robust indicators of the donor's health or lack thereof. Based on these insights, an innovative screening tool is proposed which can act as an aid in the diagnosis of diseases such as anemia, hyperlipidaemia, tuberculosis, blood cancer, leukemia, malaria, etc., with enhanced confidence in the proposed analysis. To realize this technique, simple, robust and low-cost microfluidic devices, a micro-capillary viscometer and a pendant drop tensiometer, are designed and proposed to be fabricated to measure the viscosity, surface tension and wettability of various blood samples. Once prognosis and diagnosis data have been generated, automated linear and nonlinear classifiers are applied for automated reasoning and presentation of results. A support vector machine (SVM) classifies the data in a linear fashion; discriminant analysis and nonlinear embeddings are coupled with nonlinear manifold detection in the data, and decisions are made accordingly. In this way, physical properties can be used, via linear and nonlinear classification techniques, for screening of various diseases in humans and cattle. Experiments are carried out to validate the physical property measurement devices. This framework can be further developed into a portable, real-life disease screening and diagnostics tool; small-scale production of such screening and diagnostic devices is proposed, to be followed by independent tests.
Keywords: blood, physical properties, diagnostic, nonlinear, classifier, device, surface tension, viscosity, wettability
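As a toy illustration of the linear classification stage, the sketch below uses a nearest-centroid rule (a simple linear stand-in for the SVM named above) on hypothetical viscosity and surface-tension readings. All numbers, class labels and thresholds are invented for illustration, not measurements from the study:

```python
import numpy as np

# Hypothetical (viscosity in mPa·s, surface tension in mN/m) samples per class;
# values are illustrative only, not data from the proposed devices.
healthy = np.array([[4.2, 55.0], [4.5, 56.5], [4.0, 54.2], [4.6, 55.8]])
anemic = np.array([[2.8, 48.0], [3.0, 47.2], [2.6, 49.1], [3.1, 48.5]])

# Nearest-centroid rule: assign a sample to the class whose mean is closest.
centroids = {"healthy": healthy.mean(axis=0), "anemic": anemic.mean(axis=0)}

def screen(sample):
    return min(centroids, key=lambda label: np.linalg.norm(sample - centroids[label]))

label_a = screen(np.array([4.3, 55.5]))
label_b = screen(np.array([2.9, 48.2]))
```

A production pipeline would replace the centroid rule with a trained SVM plus the nonlinear manifold methods the abstract mentions, and would standardize the features first since viscosity and surface tension live on different scales.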
Procedia PDF Downloads 376
685 Bioinformatics Identification of Rare Codon Clusters in Proteins Structure of HBV
Authors: Abdorrasoul Malekpour, Mohammad Ghorbani, Mojtaba Mortazavi, Mohammadreza Fattahi, Mohammad Hassan Meshkibaf, Ali Fakhrzad, Saeid Salehi, Saeideh Zahedi, Amir Ahmadimoghaddam, Parviz Farzadnia, Mohammadreza Hajyani Asl
Abstract:
Hepatitis B, an infectious disease, has eight main genotypes (A-H). The aim of this study is to bioinformatically identify Rare Codon Clusters (RCCs) in the protein structures of HBV. To detect the protein family accession numbers (Pfam IDs) of HBV proteins, the UniProt database and the Pfam search tool were used. The Pfam IDs obtained were analyzed in the Sherlocc program, and RCCs in HBV proteins were detected. Furthermore, the structures of the TrEMBL entry proteins were studied in the PDB database, and the 3D structures of the HBV proteins and the locations of the RCCs were visualized and studied using Swiss-PdbViewer software. The Pfam search tool found nine significant hits and no insignificant hits in three frames. The Sherlocc results show that the program did not identify RCCs in the external core antigen (PF08290) or the truncated HBeAg protein (PF08290). By contrast, RCCs were identified in the hepatitis core antigen (PF00906), large envelope protein S (PF00695), X protein (PF00739), the DNA polymerase (viral) N-terminal domain (PF00242) and protein P (PF00336). In the HBV genome, seven RCCs were identified, located in the hepatitis core antigen, large envelope protein S and DNA polymerase proteins; the protein structures of the TrEMBL entry sequences reported in the Sherlocc outputs are not complete. Based on the locations of the RCCs in the structures of the HBV proteins, it is suggested that these RCCs are important in the HBV life cycle. We hope that this study provides a new and deep perspective for protein research and for drug design in the treatment of HBV.
Keywords: rare codon clusters, hepatitis B virus, bioinformatic study, infectious disease
Procedia PDF Downloads 488
684 Improving Search Engine Performance by Removing Indexes to Malicious URLs
Authors: Durga Toshniwal, Lokesh Agrawal
Abstract:
As the web continues to play an increasing role in information exchange and daily activities, computer users have become the target of miscreants who infect hosts with malware or adware for financial gain. Unfortunately, even a single visit to a compromised web site enables the attacker to detect vulnerabilities in the user's applications and force the download of a multitude of malware binaries. We provide an approach to effectively scan for so-called drive-by downloads on the Internet. Drive-by downloads result from URLs that attempt to exploit their visitors and cause malware to be installed and run automatically. To scan the web for malicious pages, the first step is to use a crawler to collect URLs that live on the Internet, and then to apply fast prefiltering techniques to reduce the number of pages that need to be examined by precise, but slower, analysis tools (such as honeyclients or antivirus programs). Although this technique is effective, it requires a substantial amount of resources, mainly because the crawler encounters many legitimate pages that need to be filtered out. In this paper, to characterize the nature of this rising threat, we present an implementation of a web crawler in Python and an approach to search the web more efficiently for pages that are likely to be malicious, filtering out benign pages and passing the remaining pages to an antivirus program for malware detection. Our approach starts from an initial seed of known malicious web pages. Using these seeds, our system generates search engine queries to identify other malicious pages that are similar to those in the initial seed. By doing so, it leverages the crawling infrastructure of search engines to retrieve URLs that are much more likely to be malicious than a random page on the web.
The results show that this guided approach identifies malicious web pages more efficiently than random crawling-based approaches.
Keywords: web crawler, malwares, seeds, drive-by-downloads, security
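The prefiltering step described above (cheap static checks that decide which crawled pages deserve slow honeyclient or antivirus analysis) can be sketched as follows. The heuristic patterns, threshold and URLs are illustrative assumptions, not the paper's actual feature set:

```python
import re

# A few cheap static heuristics; a real prefilter uses many more features.
SUSPICIOUS = [
    r"eval\(unescape\(",           # packed/obfuscated JavaScript
    r"document\.write\(unescape",  # script injection via encoded strings
    r"<iframe[^>]*width=[\"']?0",  # zero-size iframe, a common infection vector
]

def prefilter_score(html):
    # Count how many suspicious constructs appear in the page source.
    return sum(1 for p in SUSPICIOUS if re.search(p, html, re.I))

def triage(pages, threshold=1):
    """Split crawled (url, html) pairs into likely-benign pages and pages
    that should be forwarded to slow, precise analysis tools."""
    benign, suspect = [], []
    for url, html in pages:
        (suspect if prefilter_score(html) >= threshold else benign).append(url)
    return benign, suspect

pages = [
    ("http://example.test/recipes", "<html><p>Apple pie instructions</p></html>"),
    ("http://example.test/evil", '<html><iframe width="0" height="0" '
                                 'src="http://example.test/payload"></iframe></html>'),
]
benign, suspect = triage(pages)
```

Only the `suspect` list would be handed to the antivirus stage, which is what keeps the guided approach cheaper than scanning every crawled page.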
Procedia PDF Downloads 229
683 Human Factors Integration of Chemical, Biological, Radiological and Nuclear Response: Systems and Technologies
Authors: Graham Hancox, Saydia Razak, Sue Hignett, Jo Barnes, Jyri Silmari, Florian Kading
Abstract:
In the event of a Chemical, Biological, Radiological and Nuclear (CBRN) incident, rapidly gaining situational awareness is of paramount importance, and advanced technologies have an important role to play in improving detection, identification, monitoring (DIM) and patient tracking. Understanding how these advanced technologies fit into current response systems is essential to ensure they are optimally designed, usable and meet end-users' needs. For this reason, Human Factors (Ergonomics) methods have been used within an EU Horizon 2020 project (TOXI-Triage), firstly to describe (map) the hierarchical structure of a CBRN response with adapted Accident Map (AcciMap) methodology. Secondly, Hierarchical Task Analysis (HTA) has been used to describe and review the sequence of steps (sub-tasks) in a CBRN scenario response as a task system. HTA methodology was then used to map one advanced technology, 'Tag and Trace', which tags an element (a person, sample or piece of equipment) with a Near Field Communication (NFC) chip in the Hot Zone to allow tracing (monitoring) of, for example, casualty progress through the response. This HTA mapping of the Tag and Trace system showed how the provider envisaged the technology being used, allowing its fit with current CBRN response systems to be reviewed. These methodologies proved very effective in promoting and supporting dialogue between end-users and technology providers. The Human Factors methods gave clear diagrammatic (visual) representations of how providers see their technology being used and how end-users would actually use it in the field, allowing for a more user-centered design process. For CBRN events, usability is critical, as sub-optimal technology design could add to a responder's workload in what is already a chaotic, ambiguous and safety-critical environment.
Keywords: AcciMap, CBRN, ergonomics, hierarchical task analysis, human factors
Procedia PDF Downloads 222
682 Stochastic Pi Calculus in Financial Markets: An Alternate Approach to High Frequency Trading
Authors: Jerome Joshi
Abstract:
The paper presents the modelling of financial markets using the Stochastic Pi Calculus model. Stochastic Pi Calculus is mainly used for biological applications; however, its features promote its use in financial markets, most prominently in high frequency trading. A trading system can be broadly classified into the exchange, market makers or intermediary traders, and fundamental traders. The exchange is where the trade is executed, and the two types of traders act as market participants in the exchange. High frequency trading, with its complex networks and numerous market participants (intermediary and fundamental traders), poses a difficulty for modelling. Participants seek the advantage of complex trading algorithms and high execution speeds to carry out large volumes of trades. To earn profits from each trade, a trader must be at the top of the order book quite frequently, executing or processing multiple trades simultaneously. This requires highly automated systems as well as the right sentiment to outperform other traders. However, always being at the top of the book is not best for the trader either, since it was the cause of the 'Hot-Potato Effect', which in turn demands a better and more efficient model. The characteristics of the model should be such that it is flexible and has diverse applications; therefore, a model with proven application in a similarly difficult field should be chosen, one that is flexible in its simulation so that it can be extended and adapted for future research and is equipped with tools suited to the field of finance.
On these grounds, the Stochastic Pi Calculus model is an ideal fit for financial applications, owing to its track record in biology. It is an extension of the original Pi Calculus model and acts as a solution and an alternative to the previously flawed algorithms, provided its application is suitably extended. This model focuses on solving the problem that led to the 'Flash Crash', namely the 'Hot-Potato Effect'. The model consists of small sub-systems which can be integrated to form a large system, and it is designed so that the behavior of 'noise traders' is treated as a random process, or noise, in the system. While modelling, to get a better understanding of the problem, a broader picture is considered, covering the trader, the system and the market participants. The paper goes on to explain trading in exchanges, types of traders, high frequency trading, the 'Flash Crash', the 'Hot-Potato Effect', evaluation of orders and time delay in further detail. Future work should focus on calibrating the modules so that they interact correctly with one another. With its application extended, this model will provide a basis for further research in the fields of finance and computing.
Keywords: concurrent computing, high frequency trading, financial markets, stochastic pi calculus
Procedia PDF Downloads 77
681 Monitoring Deforestation Using Remote Sensing And GIS
Authors: Tejaswi Agarwal, Amritansh Agarwal
Abstract:
Forest ecosystems play a very important role in the global carbon cycle, storing about 80% of all above-ground and 40% of all below-ground terrestrial organic carbon. There is much interest in the extent of tropical forests and their rates of deforestation for two reasons: greenhouse gas contributions and the profoundly negative impact on biodiversity. Deforestation has many ecological, social and economic consequences, one of which is the loss of biological diversity. The rapid deployment of remote sensing (RS) satellites and the development of RS analysis techniques over the past three decades have provided a reliable, effective, and practical way to characterize terrestrial ecosystem properties. Global estimates of tropical deforestation vary widely, ranging from 50,000 to 170,000 km²/yr; recent FAO tropical deforestation estimates for 1990-1995 cite 116,756 km²/yr globally. Remote sensing can prove to be a very useful tool in monitoring forests and associated deforestation to a sufficient level of accuracy without the need to physically survey the forest areas, many of which are physically inaccessible. The methodology for the assessment of forest cover using digital image processing (ERDAS) has been followed. The satellite data for the study were procured from the Indian Institute of Remote Sensing (IIRS), Dehradun, in digital format. While procuring the satellite data, care was taken to ensure that the data were cloud free and did not belong to the dry, leafless season. The Normalized Difference Vegetation Index (NDVI), defined as NDVI = (NIR - Red) / (NIR + Red), has been used as a numerical indicator of the reduction in ground biomass. After calculating the NDVI variations and the associated mean, we analysed the change in ground biomass. Through this paper, we indicate the rate of deforestation over a given period of time by comparing the forest cover at different time intervals.
With the help of remote sensing and GIS techniques, it is clearly shown that the total forest cover is continuously degrading and transforming into various land use/land cover categories.
Keywords: remote sensing, deforestation, supervised classification, NDVI, change detection
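The NDVI computation and two-date change analysis described above can be sketched in a few lines of Python with NumPy. The reflectance grids below are hypothetical values chosen to mimic a vegetated scene losing biomass, not data from the study:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    # NDVI = (NIR - Red) / (NIR + Red), computed per pixel; eps avoids 0/0.
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + eps)

# Hypothetical reflectance grids for two acquisition dates (illustrative only).
nir_t1 = np.array([[0.50, 0.60], [0.55, 0.58]])
red_t1 = np.array([[0.10, 0.12], [0.11, 0.10]])
nir_t2 = np.array([[0.30, 0.35], [0.32, 0.30]])
red_t2 = np.array([[0.20, 0.22], [0.21, 0.20]])

# Per-pixel NDVI difference between the two dates; a negative mean
# indicates an overall reduction in ground biomass.
change = ndvi(nir_t2, red_t2) - ndvi(nir_t1, red_t1)
mean_change = float(change.mean())
```

On real imagery the same arithmetic is applied band-wise to the red and near-infrared channels of each co-registered scene.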
Procedia PDF Downloads 1203
680 Extraction of Forest Plantation Resources in Selected Forest of San Manuel, Pangasinan, Philippines Using LiDAR Data for Forest Status Assessment
Authors: Mark Joseph Quinto, Roan Beronilla, Guiller Damian, Eliza Camaso, Ronaldo Alberto
Abstract:
Forest inventories are essential to assess the composition, structure and distribution of forest vegetation, which can be used as baseline information for management decisions. Classical forest inventory is labor intensive, time-consuming and sometimes even dangerous. The use of Light Detection and Ranging (LiDAR) in forest inventory can improve on and overcome these restrictions. This study was conducted to determine the possibility of using LiDAR-derived data to extract high-accuracy forest biophysical parameters and as a non-destructive method for forest status analysis of San Manuel, Pangasinan. Forest resource extraction was carried out using LAS tools, GIS, ENVI and .bat scripts with the available LiDAR data. The process includes the generation of derivatives such as the Digital Terrain Model (DTM), Canopy Height Model (CHM) and Canopy Cover Model (CCM) in .bat scripts, followed by the generation of 17 composite bands used in the extraction of forest classification covers with ENVI 4.8 and GIS software. Diameter at Breast Height (DBH), Above-Ground Biomass (AGB) and Carbon Stock (CS) were estimated for each classified forest cover, and tree count extraction was carried out using GIS. Subsequently, field validation was conducted for accuracy assessment. Results showed that the forest of San Manuel has 73% forest cover, relatively much higher than the 10% canopy cover requirement. Of the extracted canopy heights, 80% of the trees range from 12 m to 17 m. The CS of the three forest covers, based on AGB, were 20,819.59 kg/20×20 m for closed broadleaf, 8,609.82 kg/20×20 m for broadleaf plantation and 15,545.57 kg/20×20 m for open broadleaf. The average tree count for the forest plantation was 413 trees/ha. As such, the forest of San Manuel has a high percentage of forest cover and a high CS.
Keywords: carbon stock, forest inventory, LiDAR, tree count
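A minimal sketch of the CHM derivation and canopy-cover estimate underlying the workflow above: the CHM is the surface model minus the terrain model, and percent cover is the fraction of pixels above a height cut-off. The elevation grids and the 5 m canopy threshold are illustrative assumptions, not values from the study:

```python
import numpy as np

# Hypothetical 3x3 elevation grids in metres: DSM (top of canopy/surface)
# and DTM (bare earth), as would be rasterized from LiDAR returns.
dsm = np.array([[230.0, 245.0, 244.0],
                [228.0, 243.0, 212.0],
                [229.0, 241.0, 210.0]])
dtm = np.array([[228.0, 229.0, 230.0],
                [227.0, 228.0, 210.0],
                [228.0, 227.0, 209.0]])

chm = np.clip(dsm - dtm, 0.0, None)   # Canopy Height Model: surface minus terrain
canopy_mask = chm >= 5.0              # treat pixels at least 5 m tall as canopy
percent_cover = 100.0 * canopy_mask.mean()
```

The same mask, applied per management unit, is what a cover figure like the 73% reported above is computed from, after classification and validation.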
Procedia PDF Downloads 388
679 The Potential Involvement of Platelet Indices in Insulin Resistance in Morbid Obese Children
Authors: Orkide Donma, Mustafa M. Donma
Abstract:
Association between insulin resistance (IR) and hematological parameters has long been a matter of interest. Within this context, body mass index (BMI), red blood cells, white blood cells and platelets have been involved in this discussion. Platelet-related parameters associated with IR may be useful indicators for the identification of IR. Platelet indices such as mean platelet volume (MPV), platelet distribution width (PDW) and plateletcrit (PCT) are being questioned for their possible association with IR. The aim of this study was to investigate the association between platelet (PLT) count, as well as PLT indices, and the surrogate indices used to determine IR in morbidly obese (MO) children. A total of 167 children participated in the study, constituting three groups: 34, 97 and 36 children in the normal-BMI (N-BMI), MO and metabolic syndrome (MetS) groups, respectively. Sex- and age-dependent BMI-based percentile tables prepared by the World Health Organization were used for the definition of morbid obesity, and MetS criteria were determined. BMI values, homeostatic model assessment for IR (HOMA-IR), alanine transaminase-to-aspartate transaminase ratio (ALT/AST) and diagnostic obesity notation model assessment laboratory (DONMA-lab) index values were computed. PLT count and indices were analyzed using an automated hematology analyzer. Data were collected for statistical analysis using SPSS for Windows. Arithmetic means and standard deviations were calculated. Mean values of PLT-related parameters in the control and study groups were compared by one-way ANOVA followed by Tukey post hoc tests to determine whether a significant difference exists among the groups. Correlation analyses between PLT parameters and IR indices were also performed, with statistical significance accepted as p < 0.05. Increased values were detected for PLT (p < 0.01) and PCT (p > 0.05) in the MO group compared to those observed in children with N-BMI.
Significant increases in PLT (p < 0.01) and PCT (p < 0.05) were observed in the MetS group in comparison with the values obtained in children with N-BMI. Significantly lower MPV and PDW values were obtained in the MO group compared to the control group (p < 0.01). HOMA-IR (p < 0.05), DONMA-lab index (p < 0.001) and ALT/AST (p < 0.001) values in the MO and MetS groups were significantly increased compared to the N-BMI group. DONMA-lab index values also differed between the MO and MetS groups (p < 0.001). In the MO group, PLT was negatively correlated with MPV and PDW values; these correlations were not observed in the N-BMI group. None of the IR indices correlated with PLT or PLT indices in the N-BMI group, while HOMA-IR correlated significantly with both PLT and PCT in the MO group. All three IR indices were well correlated with each other in all groups. These findings point to a missing link between IR and PLT activation. In conclusion, PLT and PCT may be related to IR, in addition to their roles as hemostasis markers, during morbid obesity. Our findings suggest that the DONMA-lab index is the best surrogate marker for IR owing to its ability to discriminate between morbid obesity and MetS.
Keywords: children, insulin resistance, metabolic syndrome, plateletcrit, platelet indices
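The group comparison described above (one-way ANOVA across the N-BMI, MO and MetS groups, before Tukey post hoc testing) can be sketched as follows. The MPV values are invented for illustration, the group sizes are reduced, and the Tukey step is omitted:

```python
import numpy as np

def one_way_anova_f(*groups):
    # F statistic = between-group mean square / within-group mean square.
    values = np.concatenate(groups)
    grand_mean = values.mean()
    k, n = len(groups), values.size
    ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical MPV values (fL) per group; illustrative only, not study data.
mpv_nbmi = np.array([10.2, 10.4, 10.1, 10.3])
mpv_mo = np.array([9.1, 9.3, 9.0, 9.2])
mpv_mets = np.array([9.2, 9.0, 9.1, 9.3])
f_stat = one_way_anova_f(mpv_nbmi, mpv_mo, mpv_mets)
```

A large F relative to the F(k-1, n-k) critical value indicates that at least one group mean differs, after which a post hoc test such as Tukey's identifies which pairs differ, mirroring the analysis pipeline above.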
Procedia PDF Downloads 106
678 NDVI as a Measure of Change in Forest Biomass
Authors: Amritansh Agarwal, Tejaswi Agarwal
Abstract:
Forest ecosystems play a very important role in the global carbon cycle, storing about 80% of all above-ground and 40% of all below-ground terrestrial organic carbon. There is much interest in the extent of tropical forests and their rates of deforestation for two reasons: greenhouse gas contributions and the profoundly negative impact on biodiversity. Deforestation has many ecological, social and economic consequences, one of which is the loss of biological diversity. The rapid deployment of remote sensing (RS) satellites and the development of RS analysis techniques over the past three decades have provided a reliable, effective, and practical way to characterize terrestrial ecosystem properties. Global estimates of tropical deforestation vary widely, ranging from 50,000 to 170,000 km²/yr; recent FAO tropical deforestation estimates for 1990-1995 cite 116,756 km²/yr globally. Remote sensing can prove to be a very useful tool in monitoring forests and associated deforestation to a sufficient level of accuracy without the need to physically survey the forest areas, many of which are physically inaccessible. The methodology for the assessment of forest cover using digital image processing (ERDAS) has been followed. The satellite data for the study were procured from the USGS website in digital format. While procuring the satellite data, care was taken to ensure that the data were cloud and aerosol free by making use of the FLAASH atmospheric correction technique. The Normalized Difference Vegetation Index (NDVI), defined as NDVI = (NIR - Red) / (NIR + Red), has been used as a numerical indicator of the reduction in ground biomass. After calculating the NDVI variations and the associated mean, we analysed the change in ground biomass. Through this paper, we indicate the rate of deforestation over a given period of time by comparing the forest cover at different time intervals.
With the help of remote sensing and GIS techniques, it is clearly shown that the total forest cover is continuously degrading and transforming into various land use/land cover categories.
Keywords: remote sensing, deforestation, supervised classification, NDVI, change detection
Procedia PDF Downloads 402
677 Corpus Linguistics as a Tool for Translation Studies Analysis: A Bilingual Parallel Corpus of Students’ Translations
Authors: Juan-Pedro Rica-Peromingo
Abstract:
Nowadays, corpus linguistics has become a key research methodology for Translation Studies, which broadens the scope of cross-linguistic studies. In the case of the study presented here, the approach used focuses on learners with little or no experience to study, at an early stage, general mistakes and errors, the correct or incorrect use of translation strategies, and to improve the translational competence of the students. Led by Sylviane Granger and Marie-Aude Lefer of the Centre for English Corpus Linguistics of the University of Louvain, the MUST corpus (MUltilingual Student Translation Corpus) is an international project which brings together partners from Europe and worldwide universities and connects Learner Corpus Research (LCR) and Translation Studies (TS). It aims to build a corpus of translations carried out by students including both direct (L2 > L1) an indirect (L1 > L2) translations, from a great variety of text types, genres, and registers in a wide variety of languages: audiovisual translations (including dubbing, subtitling for hearing population and for deaf population), scientific, humanistic, literary, economic and legal translation texts. This paper focuses on the work carried out by the Spanish team from the Complutense University (UCMA), which is part of the MUST project, and it describes the specific features of the corpus built by its members. All the texts used by UCMA are either direct or indirect translations between English and Spanish. Students’ profiles comprise translation trainees, foreign language students with a major in English, engineers studying EFL and MA students, all of them with different English levels (from B1 to C1); for some of the students, this would be their first experience with translation. The MUST corpus is searchable via Hypal4MUST, a web-based interface developed by Adam Obrusnik from Masaryk University (Czech Republic), which includes a translation-oriented annotation system (TAS). 
A distinctive feature of the interface is that it allows source texts and target texts to be aligned, so that both language structures can be observed and compared in detail and the translation strategies used by the students can be studied. The initial data obtained point to the kinds of difficulties encountered by the students and reveal the most frequent strategies implemented by the learners according to their level of English, their translation experience and the text genres. We have also found common errors in the graduate and postgraduate university students’ translations: transfer errors, lexical errors, grammatical errors, text-specific translation errors, and culture-related errors have been identified. Analyzing all these parameters will provide material for better solutions to improve the quality of teaching and of the translations produced by the students.Keywords: corpus studies, students’ corpus, the MUST corpus, translation studies
Procedia PDF Downloads 147676 Fast Prototyping of Precise, Flexible, Multiplexed, Printed Electrochemical Enzyme-Linked Immunosorbent Assay System for Point-of-Care Biomarker Quantification
Authors: Zahrasadat Hosseini, Jie Yuan
Abstract:
Point-of-care (POC) diagnostic devices based on lab-on-a-chip (LOC) technology have the potential to revolutionize medical diagnostics. However, the development of an ideal microfluidic system based on LOC technology for diagnostic purposes requires overcoming several obstacles, such as improving sensitivity, selectivity, portability, cost-effectiveness, and prototyping methods. While numerous studies have introduced technologies and systems that advance these criteria, existing systems still have limitations. Electrochemical enzyme-linked immunosorbent assay (e-ELISA) in a LOC device offers numerous advantages, including enhanced sensitivity, decreased turnaround time, minimized sample and analyte consumption, reduced cost, disposability, and suitability for miniaturization, integration, and multiplexing. In this study, we present a novel design and fabrication method for a microfluidic diagnostic platform that integrates screen-printed electrochemical carbon/silver chloride electrodes on flexible printed circuit boards with flexible, multilayer, polydimethylsiloxane (PDMS) microfluidic networks to accurately manipulate and pre-immobilize analytes for performing electrochemical enzyme-linked immunosorbent assay (e-ELISA) for multiplexed quantification of blood serum biomarkers. We further demonstrate fast, cost-effective prototyping, as well as accurate and reliable detection performance of this device for quantification of interleukin-6-spiked samples through electrochemical analytical methods. We anticipate that our invention represents a significant step towards the development of user-friendly, portable, medical-grade POC diagnostic devices.Keywords: lab-on-a-chip, point-of-care diagnostics, electrochemical ELISA, biomarker quantification, fast prototyping
Procedia PDF Downloads 83675 Fast Prototyping of Precise, Flexible, Multiplexed, Printed Electrochemical Enzyme-Linked Immunosorbent Assay Platform for Point-of-Care Biomarker Quantification
Authors: Zahrasadat Hosseini, Jie Yuan
Abstract:
Point-of-care (POC) diagnostic devices based on lab-on-a-chip (LOC) technology have the potential to revolutionize medical diagnostics. However, the development of an ideal microfluidic system based on LOC technology for diagnostic purposes requires overcoming several obstacles, such as improving sensitivity, selectivity, portability, cost-effectiveness, and prototyping methods. While numerous studies have introduced technologies and systems that advance these criteria, existing systems still have limitations. Electrochemical enzyme-linked immunosorbent assay (e-ELISA) in a LOC device offers numerous advantages, including enhanced sensitivity, decreased turnaround time, minimized sample and analyte consumption, reduced cost, disposability, and suitability for miniaturization, integration, and multiplexing. In this study, we present a novel design and fabrication method for a microfluidic diagnostic platform that integrates screen-printed electrochemical carbon/silver chloride electrodes on flexible printed circuit boards with flexible, multilayer, polydimethylsiloxane (PDMS) microfluidic networks to accurately manipulate and pre-immobilize analytes for performing electrochemical enzyme-linked immunosorbent assay (e-ELISA) for multiplexed quantification of blood serum biomarkers. We further demonstrate fast, cost-effective prototyping, as well as accurate and reliable detection performance of this device for quantification of interleukin-6-spiked samples through electrochemical analytical methods. We anticipate that our invention represents a significant step towards the development of user-friendly, portable, medical-grade POC diagnostic devices.Keywords: lab-on-a-chip, point-of-care diagnostics, electrochemical ELISA, biomarker quantification, fast prototyping
Procedia PDF Downloads 85674 High-Pressure Polymorphism of 4,4-Bipyridine Hydrobromide
Authors: Michalina Aniola, Andrzej Katrusiak
Abstract:
4,4-Bipyridine is an important compound often used in chemical practice and more recently frequently applied for designing new metal-organic frameworks (MOFs). Here we present a systematic high-pressure study of its hydrobromide salt. 4,4-Bipyridine hydrobromide monohydrate, 44biPyHBrH₂O, at ambient pressure is orthorhombic, space group P212121 (phase a). Its hydrostatic compression shows that it is stable to at least 1.32 GPa. However, recrystallization above 0.55 GPa reveals a new hidden b-phase (monoclinic, P21/c). Moreover, when 44biPyHBrH₂O is heated to high temperature, chemical reactions of this compound in methanol solution can be observed. High-pressure experiments were performed using a Merrill-Bassett diamond-anvil cell (DAC), modified by mounting the anvils directly on the steel supports, and X-ray diffraction measurements were carried out on KUMA and Excalibur diffractometers equipped with an EOS CCD detector. At elevated pressure, the crystal of 44biPyHBrH₂O exhibits several striking and unexpected features. No signs of instability of phase a were detected to 1.32 GPa, while phase b becomes stable above 0.55 GPa, as evidenced by its recrystallizations. Phases a and b of 44biPyHBrH₂O are partly isostructural: their unit-cell dimensions and the arrangement of ions and water molecules are similar. In phase b the HOH-Br- chains double the frequency of their zigzag motifs compared to phase a, and the 44biPyH+ cations change their conformation. As in all monosalts of 44biPy determined so far, in phase a the pyridine rings are twisted by about 30 degrees about the C4-C4 bond, and in phase b they assume an energy-unfavorable planar conformation. Another unusual feature of 44biPyHBrH₂O is that all unit-cell parameters become longer on the transition from phase a to phase b. Thus the volume drop on the transition to high-pressure phase b depends entirely on the shear strain of the lattice.
Higher temperature triggers chemical reactions of 44biPyHBrH₂O with methanol. When the compound precipitated from saturated methanol solution at 0.1 GPa and a temperature of 423 K was required to dissolve the whole sample, the subsequent slow recrystallization at isochoric conditions resulted in the disalt 4,4-bipyridinium dibromide. For the 44biPyHBrH₂O sample sealed in the DAC at 0.35 GPa, then dissolved at isochoric conditions at 473 K and recrystallized by slow controlled cooling, a reaction of N,N-dimethylation took place. It is characteristic that in both high-pressure reactions of 44biPyHBrH₂O the unsolvated disalt products were formed and that the free base 44biPy and H₂O remained in the solution. The observed reactions indicate that high pressure destabilizes ambient-pressure salts and favors new products. Further studies on pressure-induced reactions are being carried out in order to better understand the structural preferences induced by pressure.Keywords: conformation, high-pressure, negative area compressibility, polymorphism
Procedia PDF Downloads 246673 Salt Tolerance of Potato: Genetically Engineered with Atriplex canescens BADH Gene Driven by 3 Copies of CAMV35s Promoter
Authors: Arfan Ali, Muhammad Shahzad Iqbal, Idrees Ahmad Nasir
Abstract:
Potato (Solanum tuberosum L.) is ranked among the top leading staple foods in the world. Salinity adversely affects potato crop yield and quality. Therefore, an increased level of salt tolerance is a key factor in ensuring high yield. The present study focused on the Agrobacterium-mediated transformation of the Atriplex canescens betaine aldehyde dehydrogenase (BADH) gene, using single, double and triple CAMV35s promoters to improve salt tolerance in potato. Detection of seven potato lines harboring the BADH gene, followed by identification of T-DNA insertions, determination of transgene copy number through Southern hybridization, and quantification of BADH protein through enzyme-linked immunosorbent assay, was carried out in this study. The results clearly depict that the salt tolerance of potato was promoter-dependent, as the transgenic lines with the triple promoter showed 4.4 times more glycine betaine production, which consequently leads to high resistance to salt stress compared to the transgenic potato lines with single and double promoters, which showed the least production of glycine betaine. Moreover, the triple-promoter transgenic potato lines also showed lower levels of H₂O₂, malondialdehyde (MDA) and relative electrical conductivity, and higher proline and chlorophyll content, compared to the other two lines with a single and double promoter. In silico analysis also confirmed that Atriplex canescens BADH has the tendency to interact with sodium ions and water molecules. Taken together, these findings indicate that over-expression of BADH under the triple CAMV35s promoter, with more glycine betaine, chlorophyll and MDA contents and high relative quantities of other metabolites, results in an enhanced level of salt tolerance in potato.Keywords: Atriplex canescens, BADH, CAMV35s promoter, potato, Solanum tuberosum
Procedia PDF Downloads 277672 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes
Authors: Angela U. Makolo
Abstract:
Protein-coding and Non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that Non-coding regions are important in disease progression and clinical diagnosis. Existing bioinformatics tools have been targeted towards Protein-coding regions alone. Therefore, there are challenges associated with gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both Protein-coding and Non-coding regions. Alignment-free techniques can overcome this limitation and identify both regions. Therefore, this study was designed to develop an efficient sequence alignment-free model for identifying both Protein-coding and Non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function. A parameter vector was estimated for every sample in the 37,503 data points in a bid to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the Protein-coding and Non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was determined in terms of F1 score, accuracy, sensitivity, and specificity. The average generalization performance of PNRI was determined using a benchmark of multi-species organisms.
The generalization error for identifying Protein-coding and Non-coding regions decreased from 0.514 to 0.508 and then to 0.378 over three iterations. The cost (the difference between the predicted and the actual outcome) also decreased from 1.446 to 0.842 and then to 0.718 over the first, second and third iterations. The iterations terminated at the 390th epoch with an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an ROC of 0.97, indicating an improved predictive ability. The PNRI identified both Protein-coding and Non-coding regions with an F1 score of 0.970, accuracy of 0.969, sensitivity of 0.966, and specificity of 0.973. Using 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, making the developed model better at identifying Protein-coding and Non-coding regions in transcriptomes. The developed Protein-coding and Non-coding region identifier model efficiently identified Protein-coding and Non-coding transcriptomic regions. It could be used in genome annotation and in the analysis of transcriptomes.Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation
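The training procedure described above (logistic regression with a sigmoid activation, gradient-based minimization of the negative log-likelihood over six features, and a tunable decision threshold) can be sketched as follows. This is a minimal illustration only: the feature values, labels, learning rate, and epoch count are invented stand-ins, not the study's data or settings.

```python
import math
import random

def sigmoid(z):
    """Sigmoid activation; z is clamped to avoid overflow in exp."""
    z = max(-60.0, min(60.0, z))
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=400):
    """Per-sample gradient descent on the negative log-likelihood (MLE)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def classify(X, w, b, threshold=0.5):
    """Dynamic thresholding: the cut-off can be tuned, e.g. from the ROC curve."""
    return [1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= threshold
            else 0 for xi in X]

# Toy data: six numeric features per sequence, label 1 = "protein-coding"
random.seed(0)
X = [[random.random() for _ in range(6)] for _ in range(40)]
y = [1 if sum(xi) > 3.0 else 0 for xi in X]  # synthetic linearly separable labels
w, b = train_logistic(X, y)
pred = classify(X, w, b)
accuracy = sum(p == t for p, t in zip(pred, y)) / len(y)
```

Lowering or raising `threshold` trades sensitivity against specificity, which is exactly what an ROC curve summarizes.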
Procedia PDF Downloads 68671 Analyzing the Changing Pattern of Nigerian Vegetation Zones and Its Ecological and Socio-Economic Implications Using Spot-Vegetation Sensor
Authors: B. L. Gadiga
Abstract:
This study assesses the major ecological zones in Nigeria with a view to understanding the spatial pattern of vegetation zones and the implications for conservation within a period of sixteen (16) years. Satellite images used for this study were acquired from the SPOT-VEGETATION sensor between 1998 and 2013. The annual NDVI images selected for this study were derived from the SPOT-4 sensor and were acquired within the same season (November) in order to reduce differences in spectral reflectance due to seasonal variations. The images were sliced into five classes based on the literature and knowledge of the area (i.e. <0.16 Non-Vegetated areas; 0.16-0.22 Sahel Savannah; 0.22-0.40 Sudan Savannah; 0.40-0.47 Guinea Savannah; and >0.47 Forest Zone). Classification of the 1998 and 2013 images into forested and non-forested areas showed that the forested area decreased from 511,691 km2 in 1998 to 478,360 km2 in 2013. A differencing change detection method was performed on the 1998 and 2013 NDVI images to identify areas of ecological concern. The result shows that areas undergoing vegetation degradation cover an area of 73,062 km2, while areas witnessing some form of restoration cover an area of 86,315 km2. The result also shows that there is a weak correlation between rainfall and the vegetation zones. The non-vegetated areas have a correlation coefficient (r) of 0.0088, the Sahel Savannah belt 0.1988, the Sudan Savannah belt -0.3343, the Guinea Savannah belt 0.0328 and the Forest belt 0.2635. The low correlation can be associated with the encroachment of the Sudan Savannah belt into the forest belt of the South-eastern part of the country, as revealed by the image analysis. The degradation of the forest vegetation is therefore responsible for the serious erosion problems witnessed in the South-east. The study recommends constant monitoring of vegetation and strict enforcement of environmental laws in the country.Keywords: vegetation, NDVI, SPOT-vegetation, ecology, degradation
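The NDVI slicing thresholds quoted above, together with a simple image-differencing change detection, can be illustrated as follows. The NDVI arrays and the ±0.05 change tolerance are invented for illustration; the study does not state the differencing threshold it used.

```python
import numpy as np

def classify_ndvi(ndvi):
    """Slice NDVI values into the five zones used in the study."""
    classes = np.empty(ndvi.shape, dtype=object)
    classes[ndvi < 0.16] = "Non-vegetated"
    classes[(ndvi >= 0.16) & (ndvi < 0.22)] = "Sahel Savannah"
    classes[(ndvi >= 0.22) & (ndvi < 0.40)] = "Sudan Savannah"
    classes[(ndvi >= 0.40) & (ndvi < 0.47)] = "Guinea Savannah"
    classes[ndvi >= 0.47] = "Forest"
    return classes

def change_detection(ndvi_old, ndvi_new, tol=0.05):
    """Image differencing: a drop beyond tol = degradation, a rise = restoration."""
    diff = ndvi_new - ndvi_old
    return np.where(diff < -tol, "degradation",
                    np.where(diff > tol, "restoration", "stable"))

# Five illustrative pixels spanning all five classes
ndvi_1998 = np.array([0.10, 0.20, 0.30, 0.45, 0.55])
ndvi_2013 = np.array([0.12, 0.14, 0.33, 0.55, 0.40])
zones = classify_ndvi(ndvi_1998)
change = change_detection(ndvi_1998, ndvi_2013)
```

On a real raster, the per-pixel areas in each change category would then be summed to obtain figures like the 73,062 km2 of degradation reported above.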
Procedia PDF Downloads 221670 Status of Alien Invasive Trees on the Grassland Plateau in Nyika National Park
Authors: Andrew Kanzunguze, Sopani Sichinga, Paston Simkoko, George Nxumayo, Cosmas V. B. Dambo
Abstract:
Early detection of plant invasions is a necessary prerequisite for effective invasive plant management in protected areas. This study was conducted to determine the distribution and abundance of alien invasive trees in Nyika National Park (NNP). Data on species' presence and abundance were collected from belt transects (n=31) in a 100 square kilometer area on the central plateau. The data were tested for normality using the Shapiro-Wilk test; the Mann-Whitney test was carried out to compare frequencies and abundances between the species, and geographical information systems were used for spatial analyses. Results revealed that Black Wattle (Acacia mearnsii), Mexican Pine (Pinus patula) and Himalayan Raspberry (Rubus ellipticus) were the main alien invasive trees on the plateau. A. mearnsii was localized in the areas where it was first introduced, whereas P. patula and R. ellipticus had spread beyond their original points of introduction. R. ellipticus occurred as dense, extensive (up to 50 meters) thickets on the margins of forest patches and pine stands, whilst P. patula trees were frequent in the valleys, occurring most densely (up to 39 stems per 100 square meters) south-west of Chelinda camp on the central plateau, with high variation in tree heights. Additionally, there were no significant differences in abundance between R. ellipticus (48) and P. patula (48) in the study area (p > 0.05). It was concluded that R. ellipticus and P. patula require more attention than A. mearnsii. However, further studies into the invasion ecology of both P. patula and R. ellipticus on the Nyika plateau are highly recommended so as to assess the threat posed by the species to biodiversity and to recommend appropriate conservation measures in the national park.Keywords: alien-invasive trees, Himalayan raspberry, Nyika National Park, Mexican pine
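The Mann-Whitney comparison of species abundances mentioned above can be sketched in pure Python from the rank-sum definition of the U statistic (without the p-value computation a statistics package would add). The per-transect counts below are illustrative, not the survey data.

```python
def ranks(values):
    """1-based ranks, with tied values receiving their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = mean_rank
        i = j + 1
    return r

def mann_whitney_u(x, y):
    """Mann-Whitney U statistic from rank sums over the pooled sample."""
    n1, n2 = len(x), len(y)
    r = ranks(list(x) + list(y))
    u1 = sum(r[:n1]) - n1 * (n1 + 1) / 2
    return min(u1, n1 * n2 - u1)

# Illustrative per-transect stem counts (not the survey data)
rubus = [5, 8, 12, 0, 3]
pinus = [4, 10, 7, 1, 6]
u = mann_whitney_u(rubus, pinus)  # close to n1*n2/2 = 12.5, i.e. similar distributions
```

A U value near n1*n2/2 is consistent with a non-significant difference, matching the equal abundances (48 vs. 48, p > 0.05) reported above.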
Procedia PDF Downloads 204669 Sequence Component-Based Adaptive Protection for Microgrids Connected Power Systems
Authors: Isabelle Snyder
Abstract:
Microgrid protection presents challenges to conventional protection techniques due to the low induced fault current. Protection relays present in microgrid applications require a combination of settings groups to adjust based on the architecture of the microgrid in islanded and grid-connected mode. In a radial system where the microgrid is at the other end of the feeder, directional elements can be used to identify the direction of the fault current and switch settings groups accordingly (grid connected or microgrid connected). However, with multiple microgrid connections, this concept becomes more challenging, and the direction of the current alone is not sufficient to identify the source of the fault current contribution. ORNL has previously developed adaptive relaying schemes through other DOE-funded research projects that will be evaluated and used as a baseline for this research. The four protection techniques in this study are the following: (1) Adaptive Current-only Protection System (ACPS), (2) Intentional Unbalanced Control for Protection Control (IUCPC), (3) Adaptive Protection System with Communication Controller (APSCC), and (4) Adaptive Model-Driven Protective Relay (AMDPR). The first two methods focus on identifying the islanded mode without communication by monitoring the current sequence component generated by the system (ACPS) or induced with inverter control during islanded mode (IUCPC) to identify the islanding condition at the relay without communication and to adjust the settings. These two methods are used as a backup to the APSCC, which relies on a communication network to communicate the islanded configuration to the system components.
The fourth method relies on a short-circuit model inside the relay that is used in conjunction with the communicated system configuration: the relay computes the fault current and adjusts the settings accordingly.Keywords: adaptive relaying, microgrid protection, sequence components, islanding detection, communication controlled protection, integrated short circuit model
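Sequence-component monitoring, on which the ACPS and IUCPC schemes rely, rests on the Fortescue transform of the three phase currents. A minimal sketch follows; the balanced three-phase example and the negative-sequence unbalance ratio at the end are illustrative assumptions, not the relay logic of the study.

```python
import cmath

# 120-degree rotation operator 'a' of the Fortescue transform
ALPHA = cmath.exp(2j * cmath.pi / 3)

def sequence_components(ia, ib, ic):
    """Return the zero-, positive-, and negative-sequence current phasors."""
    i0 = (ia + ib + ic) / 3
    i1 = (ia + ALPHA * ib + ALPHA**2 * ic) / 3
    i2 = (ia + ALPHA**2 * ib + ALPHA * ic) / 3
    return i0, i1, i2

# A balanced set (equal magnitudes, 120 degrees apart) has only a
# positive-sequence component; unbalance shows up in i0 and i2.
ia = cmath.rect(100, 0)
ib = cmath.rect(100, -2 * cmath.pi / 3)
ic = cmath.rect(100, 2 * cmath.pi / 3)
i0, i1, i2 = sequence_components(ia, ib, ic)

# Illustrative unbalance indicator of the kind a sequence-monitoring
# relay might threshold (an assumption, not the study's criterion)
unbalance = abs(i2) / abs(i1)
```

An intentionally injected negative-sequence component, as in IUCPC, would raise `unbalance` above a set threshold and flag the islanded condition without any communication channel.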
Procedia PDF Downloads 95668 The Utilization of Manganese-Enhanced Magnetic Resonance Imaging in the Fields of Ophthalmology and Visual Neuroscience
Authors: Parisa Mansour
Abstract:
Understanding how vision works in both health and disease involves understanding the anatomy and physiology of the eye as well as the neural pathways involved in visual perception. The development of imaging techniques for the visual system is essential for understanding the neural foundation of visual function or impairment. MRI provides a way to examine neural circuit structure and function without invasive procedures, allowing for the detection of brain tissue abnormalities in real time. One of the advanced MRI methods is manganese-enhanced MRI (MEMRI), which utilizes active manganese contrast agents to enhance brain tissue signals in T1-weighted imaging, showcasing connectivity and activity levels. Manganese ions accumulate in the eye and visual pathways either by spreading throughout the body or by moving locally along axons in the anterograde direction, entering neurons through voltage-gated calcium channels. The paramagnetic manganese contrast is utilized in MRI for various applications in the visual system, such as imaging neurodevelopment and evaluating neurodegeneration, neuroplasticity, neuroprotection, and neuroregeneration. In this assessment, we outline four key areas of scientific research where MEMRI can play a crucial role: understanding brain structure, mapping nerve pathways, monitoring nerve cell function, and distinguishing between different types of glial cell activity. We discuss various studies that have utilized MEMRI to investigate the visual system, including delivery methods, spatiotemporal features, and biophysical analysis. Based on this literature, we have pinpointed key issues in the field related to toxicity, as well as the sensitivity and specificity of manganese enhancement.
We will also examine the drawbacks of MEMRI and alternatives to it that could offer new possibilities for future exploration.Keywords: glial activity, manganese-enhanced magnetic resonance imaging, neuroarchitecture, neuronal activity, neuronal tract tracing, visual pathway, eye
Procedia PDF Downloads 40667 Implementing of Indoor Air Quality Index in Hong Kong
Authors: Kwok W. Mui, Ling T. Wong, Tsz W. Tsang
Abstract:
Many Hong Kong people nowadays spend most of their lives working indoors. Since poor Indoor Air Quality (IAQ) potentially leads to discomfort, ill health, low productivity and even absenteeism in workplaces, a call for establishing statutory IAQ control to safeguard the well-being of residents is urgently required. Although policies, strategies, and guidelines for workplace IAQ diagnosis have been developed elsewhere and followed with remedial works, some of those workplaces or buildings were at a relatively late stage of their IAQ problems when the investigation or remedial work started. Screening for IAQ problems should be initiated early, as it will provide the minimum provision of an IAQ baseline requisite to the resolution of the problems. It is not practical to sample all air pollutants that exist. Nevertheless, for statutory control, reliable, rapid screening is essential, in accordance with a compromise strategy which balances costs against detection of key pollutants. This study investigates the feasibility of using an IAQ index as a parameter of IAQ control in Hong Kong. The index is a screening parameter to identify unsatisfactory workplace IAQ and will highlight where a fully effective IAQ monitoring and assessment is needed for an intensive diagnosis. A number of representative common indoor pollutants have already been identified through extensive IAQ assessments. The selected pollutants serve as surrogates for IAQ control, which consists of dilution, mitigation, and emission control. The IAQ Index and assessment will look at high fractional quantities of these common measurement parameters. With the support of the existing comprehensive regional IAQ database and the IAQ Index developed by the research team as the pre-assessment probability, and the unsatisfactory IAQ prevalence as the post-assessment probability from this study, thresholds for maintaining the current measures or performing a further IAQ test or IAQ remedial measures will be proposed.
With justified resources, the proposed IAQ Index and assessment protocol might be a useful tool for setting up a practical public IAQ surveillance programme and policy in Hong Kong.Keywords: assessment, index, indoor air quality, surveillance programme
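A screening index built from "high fractional quantities" of monitored pollutants, as described above, might look like the following sketch. The pollutant list, the limit values, and the unity screening threshold are all hypothetical placeholders, not the study's actual index definition or Hong Kong's statutory limits.

```python
# Hypothetical 8-hour limit values, for illustration only (not the
# official Hong Kong IAQ Objectives)
LIMITS = {"CO2_ppm": 1000, "CO_ppm": 8.7, "PM10_ug_m3": 180, "HCHO_ug_m3": 100}

def iaq_index(readings):
    """Screening index: the highest fractional quantity c_i / limit_i,
    together with the pollutant that produced it."""
    fractions = {k: readings[k] / LIMITS[k] for k in readings}
    worst = max(fractions, key=fractions.get)
    return fractions[worst], worst

# Example measurement set where only PM10 exceeds its assumed limit
index, pollutant = iaq_index(
    {"CO2_ppm": 850, "CO_ppm": 2.0, "PM10_ug_m3": 200, "HCHO_ug_m3": 40})
needs_full_assessment = index > 1.0  # screening threshold assumed
```

The appeal of such a maximum-fraction form is that a single number above 1.0 immediately flags which pollutant should trigger the fuller, intensive IAQ diagnosis.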
Procedia PDF Downloads 267666 The Usage of Negative Emotive Words in Twitter
Authors: Martina Katalin Szabó, István Üveges
Abstract:
In this paper, the usage of negative emotive words is examined on the basis of a large Hungarian Twitter database via NLP methods. The data are analysed from a gender point of view, as well as for changes in language usage over time. The term negative emotive word refers to those words that, on their own, without context, have semantic content that can be associated with negative emotion, but in particular cases, they may function as intensifiers (e.g. rohadt jó ’damn good’) or as sentiment expressions with positive polarity despite their negative prior polarity (e.g. brutális, ahogy ez a férfi rajzol ’it’s awesome (lit. brutal) how this guy draws’). Based on the findings of several authors, the same phenomenon can be found in other languages, so it is probably a language-independent feature. For the present analysis, 67783 tweets were collected: 37818 tweets (19580 written by females and 18238 written by males) in 2016 and 48344 (18379 written by females and 29965 written by males) in 2021. The goal of the research was to compile two datasets comparable from the viewpoint of semantic changes as well as of gender specificities. An exhaustive lexicon of Hungarian negative emotive intensifiers was also compiled (containing 214 words). After basic preprocessing steps, tweets were processed by ‘magyarlanc’, a toolkit written in Java for the linguistic processing of Hungarian texts. Then, the frequency and collocation features of all these words in our corpus were automatically analyzed (via the analysis of parts-of-speech and sentiment values of the co-occurring words). Finally, the results of all four subcorpora were compared. Some of the main outcomes of our analyses are as follows: there are almost four times fewer cases in the male corpus than in the female corpus in which a negative emotive intensifier modifies a negative polarity word in the tweet (e.g., damn bad).
At the same time, male authors used these intensifiers more frequently to modify a positive polarity or a neutral word (e.g., damn good and damn big). The results also pointed out that, in contrast to female authors, male authors used these words much more frequently as positive polarity words themselves (e.g., brutális, ahogy ez a férfi rajzol ’it’s awesome (lit. brutal) how this guy draws’). We also observed that male authors use significantly fewer types of emotive intensifiers than female authors, and the frequency proportion of the words is more balanced in the female corpus. As for changes in language usage over time, some notable differences in the frequency and collocation features of the words examined were identified: some of the words collocate with more positive words in the second subcorpus than in the first, which points to a semantic change of these words over time.Keywords: gender differences, negative emotive words, semantic changes over time, twitter
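The collocation analysis described above (checking whether a negative emotive intensifier modifies a negative, positive, or neutral word) can be sketched as follows. The toy lexicons stand in for the study's 214-word intensifier lexicon and its sentiment resources, and the adjacent-token heuristic is a simplification of the dependency-based processing done by magyarlanc.

```python
from collections import Counter

# Toy lexicons (illustrative stand-ins for the study's resources)
INTENSIFIERS = {"rohadt", "brutális"}  # negative prior-polarity intensifiers
POLARITY = {"jó": "positive", "rossz": "negative", "nagy": "neutral"}

def collocation_profile(tweets):
    """Count intensifier + modified-word polarity pairs in tokenised tweets,
    treating the token immediately after the intensifier as the modified word."""
    counts = Counter()
    for tokens in tweets:
        for i, tok in enumerate(tokens[:-1]):
            if tok in INTENSIFIERS:
                counts[POLARITY.get(tokens[i + 1], "unknown")] += 1
    return counts

tweets = [
    ["rohadt", "jó", "nap"],       # intensifier + positive word
    ["brutális", "rossz", "hét"],  # intensifier + negative word
    ["rohadt", "nagy"],            # intensifier + neutral word
]
profile = collocation_profile(tweets)
```

Computed separately per subcorpus (female/male, 2016/2021), such profiles yield exactly the frequency proportions compared in the study.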
Procedia PDF Downloads 205665 Detection of Bcl2 Polymorphism in Patient with Hepatocellular carcinoma
Authors: Mohamed Abdel-Hamid, Olfat Gamil Shaker, Doha El-Sayed Ellakwa, Eman Fathy Abdel-Maksoud
Abstract:
Introduction: Despite advances in the knowledge of the molecular virology of hepatitis C virus (HCV), the mechanisms of hepatocellular injury in HCV infection are not completely understood. Hepatitis C viral infection influences the susceptibility to apoptosis, which could lead to an insufficient antiviral immune response and persistent viral infection. Aim of this study: to examine whether the BCL-2 gene polymorphism at codon 43 (+127G/A or Ala43Thr) has an impact on the development of hepatocellular carcinoma in Egyptian patients with chronic hepatitis C. Subjects and Methods: The study included three groups; group 1 comprising 30 patients with hepatocellular carcinoma (HCC), group 2 comprising 30 patients with HCV, and group 3 comprising 30 healthy subjects of matching age and socioeconomic status taken as a control group. The BCL2 (Ala43Thr) gene polymorphism was evaluated by the PCR-RFLP technique for all patients and controls. Results: The combined 43Thr genotype was more frequent and statistically significant in HCC patients as compared to the control group. This genotype of the BCL2 gene may inhibit programmed cell death, which leads to disturbances in tissue and cell homeostasis and a reduction in immune regulation. This in turn favors viral replication and HCV persistence. Moreover, the virus employs a variety of mechanisms to block genes involved in apoptosis. This mechanism suggests that HCV patients who carry the 43Thr genotype are more susceptible to HCC. Conclusion: The data suggest for the first time that the BCL2 polymorphism is associated with susceptibility to HCC in Egyptian populations and might be used as a molecular marker for evaluating HCC risk. This study clearly demonstrated that chronic HCV patients exhibit a deregulation of apoptosis with disease progression.
This provides an insight into the pathogenesis of chronic HCV infection, and may contribute to the therapy.Keywords: BCL2 gene, Hepatitis C Virus, Hepatocellular carcinoma, sensitivity, specificity, apoptosis
Procedia PDF Downloads 508664 Effects of Cacao Agroforestry and Landscape Composition on Farm Biodiversity and Household Dietary Diversity
Authors: Marlene Yu Lilin Wätzold, Wisnu Harto Adiwijoyo, Meike Wollni
Abstract:
Land-use conversion from tropical forests to cash crop production in the form of monocultures has drastic consequences for biodiversity. Meanwhile, high dependence on cash crop production is often associated with a decrease in other food crop production, thereby affecting household dietary diversity. Additionally, deforestation has been found to reduce households’ dietary diversity, as forests often offer various food sources. Agroforestry systems are seen as a potential solution to improve local biodiversity as well as provide a range of provisioning ecosystem services, such as timber and other food crops. While a number of studies have analyzed the effects of agroforestry on biodiversity as well as on household livelihood indicators, little is understood about potential trade-offs or synergies between the two. This interdisciplinary study aims to fill this gap by assessing cacao agroforestry’s role in enhancing local bird diversity as well as farm household dietary diversity. Additionally, we will take a landscape perspective and investigate in what ways the landscape composition, such as the proximity to forests and forest patches, is able to contribute to local bird diversity as well as to households’ dietary diversity. Our study will take place in two agro-ecological zones in Ghana and is based on household surveys of 500 cacao farm households. Using a subsample of 120 cacao plots, we will assess shade tree diversity and density using drone flights and a computer-vision tree detection algorithm. Bird density and diversity will be assessed using sound recorders left in the cacao plots for 24 hours. Landscape composition will be assessed via remote sensing images. The results of our study are of high importance, as they will allow us to understand the effects of agroforestry and landscape composition in simultaneously improving several ecosystem services.Keywords: agroforestry, biodiversity, landscape composition, nutrition
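Diversity assessments of the kind planned above (shade-tree species on plots, bird species from recordings) are commonly summarized with the Shannon index; the sketch below uses invented shade-tree counts, and the choice of metric is an assumption, since the abstract does not name a specific diversity index.

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over species abundances."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Illustrative shade-tree counts per species on two hypothetical cacao plots
plot_a = [12, 3, 5, 1]  # four shade-tree species, uneven abundances
plot_b = [21]           # near-monoculture: a single shade-tree species
h_a = shannon_index(plot_a)
h_b = shannon_index(plot_b)  # 0.0: no diversity with one species
```

Comparing such plot-level indices against household dietary diversity scores is one straightforward way to test for the trade-offs or synergies the study targets.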
Procedia PDF Downloads 1136
663 Design, Simulation and Fabrication of Electro-Magnetic Pulse Welding Coil and Initial Experimentation
Authors: Bharatkumar Doshi
Abstract:
Electro-Magnetic Pulse Welding (EMPW) is a solid-state welding process carried out at almost room temperature, in which joining is enabled by high-impact-velocity deformation. In this process, the energy stored in a high-voltage capacitor is discharged into an EM coil, resulting in a damped sinusoidal current with an amplitude of several hundred kiloamperes. As a result, transient magnetic fields of a few tens of tesla are generated near the coil. As the conductive part (tube) is positioned in this area, an opposing eddy current is induced in it. Consequently, high Lorentz forces act on the part, accelerating it away from the coil. In the case of a tube, it is compressed at forming velocities of more than 300 meters per second. After crossing the joining gap, it collides with the second metallic joining rod, leading to the formation of a jet under appropriate collision conditions. Due to the prevailing high pressure, metallurgical bonding takes place. A characteristic feature is the wavy interface resulting from the heavy plastic deformation. In this process, the formation of intermetallic compounds, which might deteriorate the weld strength, can be avoided, even for metals with dissimilar thermal properties. To optimize the process, it is critical to establish parameters such as current, voltage, inductance, coil dimensions, workpiece dimensions, air gap, impact velocity, effective plastic strain, and the shear stress acting in the welding/impact zone. These process parameters can be determined by simulation using the Finite Element Method (FEM), in which a coupled electromagnetic-structural field analysis is performed. The feasibility of welding can thus be investigated by varying the parameters in a simulation using COMSOL. Simulation results shall be applied in performing preliminary experiments on welding different alloy steel tubes and/or alloy steel to other materials.
The single-turn coil (S.S. 304) with a field shaper (copper) has been designed and manufactured. The preliminary experiments were performed using the existing EMPW facility available at the Institute for Plasma Research, Gandhinagar, India. In these experiments, a 64 µF capacitor bank was charged to 22 kV and the energy was discharged into the single-turn EM coil. Welding of axisymmetric components such as an aluminum tube and rod has been proven experimentally using the EMPW technique. In this paper, the EM coil design, manufacturing, electromagnetic-structural FEM simulation of magnetic pulse welding, and preliminary experimental results are reported.
Keywords: COMSOL, EMPW, FEM, Lorentz force
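The damped sinusoidal discharge current described in this abstract follows from treating the capacitor bank and coil as a series RLC circuit. As an illustrative sketch only, using the stated 64 µF / 22 kV bank but assumed (hypothetical) values for the circuit inductance and resistance, the underdamped waveform i(t) = (V0/(ωd L)) e^(-αt) sin(ωd t) can be evaluated as:

```python
import math

def discharge_current(t, V0, C, L, R):
    """Underdamped series-RLC discharge current i(t) = (V0/(wd*L)) e^(-a*t) sin(wd*t)."""
    a = R / (2 * L)                        # damping coefficient alpha
    wd = math.sqrt(1 / (L * C) - a ** 2)   # damped angular frequency
    return (V0 / (wd * L)) * math.exp(-a * t) * math.sin(wd * t)

# Bank parameters as stated in the abstract; coil-circuit L and R are assumptions
V0, C = 22e3, 64e-6      # 22 kV across a 64 uF capacitor bank
L, R = 100e-9, 5e-3      # hypothetical circuit inductance and resistance

a = R / (2 * L)
wd = math.sqrt(1 / (L * C) - a ** 2)
t_peak = math.atan(wd / a) / wd          # time of the first current maximum
print(f"ringing frequency ~ {wd / (2 * math.pi) / 1e3:.0f} kHz")
print(f"first current peak ~ {discharge_current(t_peak, V0, C, L, R) / 1e3:.0f} kA")
```

With these assumed circuit values the sketch yields a ringing frequency of a few tens of kilohertz and a first current peak of several hundred kiloamperes, consistent with the amplitude range quoted in the abstract.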
Procedia PDF Downloads 184
662 Empirical Analysis of Forensic Accounting Practices for Tackling Persistent Fraud and Financial Irregularities in the Nigerian Public Sector
Authors: Sani AbdulRahman Bala
Abstract:
This empirical study delves into the realm of forensic accounting practices within the Nigerian public sector, seeking to quantitatively analyze their efficacy in addressing the persistent challenges of fraud and financial irregularities. With a focus on empirical data, this research employs a robust methodology to assess the current state of fraud in the Nigerian public sector and to evaluate the performance of existing forensic accounting measures. Through quantitative analyses, including statistical models and data-driven insights, the study aims to identify patterns, trends, and correlations associated with fraudulent activities. The research objectives include scrutinizing documented fraud cases, examining the effectiveness of established forensic accounting practices, and proposing data-driven strategies for enhancing fraud detection and prevention. Leveraging quantitative methodologies, the study seeks to measure the impact of technological advancements on forensic accounting accuracy and efficiency. Additionally, the research explores collaborative mechanisms among government agencies, regulatory bodies, and the private sector by quantifying the effects of information sharing on fraud prevention. The empirical findings from this study are expected to provide a nuanced understanding of the challenges and opportunities in combating fraud within the Nigerian public sector. The quantitative insights derived from real-world data will contribute to the refinement of forensic accounting strategies, ensuring their effectiveness in addressing the unique complexities of financial irregularities in the public sector. The study's outcomes aim to inform policymakers, practitioners, and stakeholders, fostering evidence-based decision-making and proactive measures for a more resilient and fraud-resistant financial governance system in Nigeria.
Keywords: fraud, financial irregularities, Nigerian public sector, quantitative investigation
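As one example of the kind of data-driven screening such a study might quantify (a standard forensic accounting technique, not a method this abstract specifically claims), transaction amounts can be tested against Benford's law, which predicts that leading digit d appears with probability log10(1 + 1/d):

```python
import math

def leading_digit(x):
    """First significant digit of a nonzero amount."""
    x = abs(x)
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

def benford_chi_square(amounts):
    """Chi-square statistic of observed leading-digit counts vs. Benford's law."""
    digits = [leading_digit(a) for a in amounts if a]
    n = len(digits)
    chi2 = 0.0
    for d in range(1, 10):
        observed = digits.count(d)
        expected = n * math.log10(1 + 1 / d)   # Benford expected count for digit d
        chi2 += (observed - expected) ** 2 / expected
    return chi2
```

With 8 degrees of freedom, a statistic above roughly 15.5 (the 5% critical value) would flag an account's transaction amounts for closer review.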
Procedia PDF Downloads 62
661 Performance Analysis of Double-Gate FinFET at the Sub-10 nm Node
Authors: Suruchi Saini, Hitender Kumar Tyagi
Abstract:
With the rapid progress of the nanotechnology industry, it is becoming increasingly important to have compact semiconductor devices that function well and offer the best results at various technology nodes. When scaling a device, several short-channel effects occur. To minimize these scaling limitations, several device architectures have been developed in the semiconductor industry, and the FinFET is one of the most promising structures. The double-gate 2D fin field-effect transistor has the benefit of suppressing short-channel effects (SCE) and functions well at technology nodes below 14 nm. In the present research, the MuGFET simulation tool is used to analyze and explain the electrical behaviour of a double-gate 2D fin field-effect transistor. The drift-diffusion and Poisson equations are solved self-consistently. Various models, such as the Fermi-Dirac distribution, bandgap narrowing, carrier scattering, and concentration-dependent mobility models, are used for device simulation. The transfer and output characteristics of the double-gate 2D fin field-effect transistor are determined at the 10 nm technology node. The performance parameters are extracted in terms of threshold voltage, transconductance, leakage current, and current on-off ratio. In this paper, the device performance is analyzed for different structure parameters. The Id-Vg curve is a robust tool of great importance in transistor modeling, circuit design, performance optimization, and quality control of electronic devices and integrated circuits, as it gives a comprehensive picture of field-effect transistor behaviour. The FinFET structure is optimized to increase the current on-off ratio and transconductance. Through this analysis, the impact of different channel widths and source and drain lengths on the Id-Vg characteristics and transconductance is examined. Device performance is affected by the difficulty of maintaining effective gate control over the channel at decreasing feature sizes.
For every set of simulations, the device characteristics are simulated at two different drain voltages, 50 mV and 0.7 V. In low-power and precision applications, the off-state current is a significant factor to consider; therefore, it is crucial to minimize the off-state current to maximize circuit performance and efficiency. The findings demonstrate that the current on-off ratio is maximized with a channel width of 3 nm for a gate length of 10 nm, while the source and drain lengths have no significant effect on the current on-off ratio. The transconductance value plays a pivotal role in various electronic applications and should be considered carefully. In this research, it is also concluded that a transconductance of 340 S/m is achieved with a fin width of 3 nm at a gate length of 10 nm, and 2380 S/m for a source and drain extension length of 5 nm.
Keywords: current on-off ratio, FinFET, short-channel effects, transconductance
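The figures of merit discussed in this abstract, the current on-off ratio and the transconductance gm = dId/dVg, are extracted directly from a simulated Id-Vg sweep. A minimal sketch of that extraction, using synthetic (hypothetical) Id-Vg data rather than the paper's MuGFET output:

```python
def extract_fom(vg, id_):
    """On/off current ratio and peak transconductance from an Id-Vg sweep."""
    on_off = max(id_) / min(id_)
    # transconductance gm = dId/dVg via central differences
    gm = [(id_[i + 1] - id_[i - 1]) / (vg[i + 1] - vg[i - 1])
          for i in range(1, len(vg) - 1)]
    return on_off, max(gm)

# Synthetic Id-Vg sweep (illustrative numbers only): exponential subthreshold
# conduction below an assumed Vt of ~0.3 V, roughly linear current above it
vg = [round(0.05 * k, 2) for k in range(15)]                     # 0 .. 0.7 V
id_ = [1e-9 * 10 ** (v / 0.08) if v < 0.3 else 5.62e-6 + 2e-4 * (v - 0.3)
       for v in vg]

on_off, gm_peak = extract_fom(vg, id_)
print(f"Ion/Ioff ~ {on_off:.1e}, peak gm ~ {gm_peak:.1e} S")
```

The same routine applied to sweeps at different fin widths or source/drain extensions reproduces the kind of comparison the paper reports.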
Procedia PDF Downloads 61