Search results for: Bal Deep Sharma
1813 Effect of Blast Loads on the Seismically Designed Reinforced Concrete Buildings
Authors: Jhuma Debnath, Hrishikesh Sharma
Abstract:
This paper studies the effect of high-explosive blast loads on seismically designed reinforced concrete buildings. Buildings are seismically designed in SAP 2000 software using the response spectrum method. These buildings are then subjected to blast loads of a fixed explosive weight while the standoff distance between the building and the explosion is varied. The study found that, for a seismically designed building, the minimum standoff distance should be at least 120 m from the place of explosion for an average blast explosive weight of 20 kg TNT. At this distance the building does not fail under this large explosive weight and resists immediate collapse. The results also show that the adverse effect of column failure due to blasting is reduced from 73.75% to 22.5% as the standoff distance from the blast increases. The locations most affected by the blast loads are also identified in this study.
Keywords: blast loads, seismically designed buildings, standoff distance, reinforced concrete buildings
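The pairing of charge weight and standoff distance in the abstract is conventionally summarized by the Hopkinson-Cranz scaled distance Z = R / W^(1/3), a standard quantity in blast-load assessment. The paper does not state which scaling it uses, so this is an illustrative sketch, not the authors' method:

```python
def scaled_distance(standoff_m, charge_kg_tnt):
    """Hopkinson-Cranz scaled distance Z = R / W^(1/3), in m/kg^(1/3)."""
    return standoff_m / charge_kg_tnt ** (1.0 / 3.0)

# The study's reported minimum safe configuration: 20 kg TNT at 120 m standoff.
z = scaled_distance(120.0, 20.0)
```

At 120 m from 20 kg TNT, Z is roughly 44 m/kg^(1/3), a comparatively large scaled distance at which only moderate overpressures would be expected.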
Procedia PDF Downloads 237
1812 Effect of Extracorporeal Shock Wave Therapy on Post Burn Scars
Authors: Mahmoud S. Zaghloul, Mohammed M. Khalaf, Wael N. Thabet, Haidy N. Asham
Abstract:
Background: Hypertrophic scarring is a difficult problem for burn patients, and scar management is an essential aspect of outpatient burn therapy. Post-burn pathologic scars involve functional and aesthetic limitations that have a dramatic influence on the patient's quality of life. The aim was to investigate the use of extracorporeal shock wave therapy (ESWT), which targets the fibroblasts in scar tissue, as an effective modality for scar treatment in burn patients. Subjects and methods: Forty patients with post-burn scars, aged 20-45 years, were randomly assigned into two equal groups. The study group received ESWT plus a traditional physical therapy program (deep friction massage, stretching exercises); the control group received the traditional physical therapy program alone. Both groups received two sessions per week for six successive weeks. Data were collected before and after the same treatment period for both groups. Scar thickness was measured using ultrasonography, and the Vancouver Scar Scale (VSS) was completed before and after treatment. Results: Post-treatment results showed a significant difference in scar thickness improvement between the groups in favor of the study group: 42.55% improvement in the study group versus 12.15% in the control group. There was also a significant difference in VSS results in favor of the study group. Conclusion: ESWT is effective in the management of pathologic post-burn scars.
Keywords: extracorporeal shock wave therapy, post-burn scars, ultrasonography, Vancouver scar scale
Procedia PDF Downloads 256
1811 Application of Deep Learning Algorithms in Agriculture: Early Detection of Crop Diseases
Authors: Manaranjan Pradhan, Shailaja Grover, U. Dinesh Kumar
Abstract:
The farming community in India, as in other parts of the world, is highly stressed by rising input costs (seeds, fertilizers, pesticides), droughts, and falling revenue, which has led to farmer suicides. The lack of an integrated farm advisory system in India adds to farmers' problems. Farmers need the right information during the early stages of a crop's lifecycle to prevent damage and loss of revenue. In this paper, we use deep learning techniques to develop an early warning system for detecting crop diseases from images taken by farmers with their smartphones. This work leads to a smart assistant, built on analytics and big data, that can help farmers with early diagnosis of crop diseases and corrective actions. The classical approach to crop disease management has been to identify diseases at the crop level. Recently, convolutional neural networks (CNNs), popularized by ImageNet classification, have been successfully used to identify diseases at the individual plant level. Our model uses convolution filters, max pooling, dense layers, and dropout (to avoid overfitting). Models are built for binary classification (healthy or not healthy) and multi-class classification (identifying which disease). Transfer learning is used to adapt weights learnt on the ImageNet dataset to crop diseases, which reduces the number of training epochs. One-shot learning is used to learn from very few images, while data augmentation techniques such as rotation, zoom, shift, and blurring improve accuracy on images taken from farms. Models built by combining these techniques are more robust for real-world deployment. Our model is validated on the tomato crop, which in India is affected by 10 different diseases, and achieves an accuracy of more than 95% in correctly classifying them.
The main contribution of our research is a personal assistant that helps farmers manage plant disease; although the model was validated on the tomato crop, it can easily be extended to other crops. Advances in computing and the availability of large datasets have enabled the success of deep learning in computer vision, natural language processing, image recognition, and related fields. With these robust models and wide smartphone penetration, implementation is highly feasible, enabling timely advice to farmers, increasing their income, and reducing input costs.
Keywords: analytics in agriculture, CNN, crop disease detection, data augmentation, image recognition, one shot learning, transfer learning
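The convolution filters and max pooling named above can be illustrated with a minimal, dependency-free sketch of the two building blocks. The actual model is a full CNN trained on farm images; the 4x4 "image" and edge-detecting kernel here are toy values:

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (implemented as cross-correlation, as in CNN layers)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(image[i + a][j + b] * kernel[a][b]
                            for a in range(kh) for b in range(kw))
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling with a size x size window."""
    out = []
    for i in range(0, len(feature_map) - size + 1, size):
        row = []
        for j in range(0, len(feature_map[0]) - size + 1, size):
            row.append(max(feature_map[i + a][j + b]
                           for a in range(size) for b in range(size)))
        out.append(row)
    return out

img = [[1, 0, 2, 1], [0, 1, 3, 0], [2, 1, 0, 1], [1, 0, 1, 2]]
edge = [[1, -1], [-1, 1]]
pooled = max_pool(conv2d(img, edge))
```

Stacking many such filtered-and-pooled maps, followed by dense layers and dropout, gives the architecture sketched in the abstract.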
Procedia PDF Downloads 120
1810 Bi-Criteria Objective Network Design Model for Multi Period Multi Product Green Supply Chain
Authors: Shahul Hamid Khan, S. Santhosh, Abhinav Kumar Sharma
Abstract:
Environmental performance, along with social performance, is becoming a vital factor for industries seeking to achieve global standards, and a good environmental policy differentiates global industries from their competitors. This paper concentrates on a multi-stage, multi-product, multi-period manufacturing network. Bi-objective mathematical models for total cost and total emission of the entire forward supply chain are considered. Five different problems, obtained by varying the number of suppliers, manufacturers, and environmental levels, are used to illustrate the mathematical model. A genetic algorithm (GA) and random search are used to find the optimal solution. The input parameters of the optimal solution are used to examine the trade-off between the industry's initial investment and the long-term benefit to the environment.
Keywords: closed loop supply chain, genetic algorithm, random search, green supply chain
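As a rough illustration of the random-search side of the solution approach, the sketch below scalarizes a toy bi-objective cost/emission model with a weighted sum and samples candidate designs. The cost and emission functions are invented stand-ins, not the paper's actual network-design model:

```python
import random

def total_cost(x):
    """Hypothetical cost model: greener designs (larger x) cost more."""
    return 100.0 + 5.0 * x

def total_emission(x):
    """Hypothetical emission model: greener designs emit less (x in [0, 20])."""
    return 80.0 - 3.0 * x

def random_search(weight_cost, trials=2000, seed=42):
    """Scalarize the bi-objective problem and sample candidate designs."""
    rng = random.Random(seed)
    best_x, best_val = None, float("inf")
    for _ in range(trials):
        x = rng.uniform(0.0, 20.0)
        val = weight_cost * total_cost(x) + (1.0 - weight_cost) * total_emission(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val
```

Sweeping the cost weight between 0 and 1 traces out the trade-off between initial investment and environmental benefit that the authors examine.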
Procedia PDF Downloads 549
1809 Histopathological Alterations in Liver of Mice Exposed to Different Doses of Diclofenac Sodium
Authors: Deepak Mohan, Sushma Sharma
Abstract:
Diclofenac sodium, a member of the acetic acid family of non-steroidal anti-inflammatory drugs, is used to reduce inflammation and to treat arthritis pain and ankylosing spondylitis. The drug is known to cause severe injury in different tissues due to the formation of reactive oxygen species. The present study focuses on the effect of different doses of diclofenac (4 mg/kg body weight and 14 mg/kg body weight) on the histoarchitecture of the liver from 7 to 28 days of the investigation. Diclofenac administration resulted in distorted hepatic degeneration and the formation of wide sinusoidal gaps. Hepatic fibrosis, noticed at different stages of the investigation, could be attributed to chronic inflammation and reactive oxygen species, which result in deposition of extracellular matrix proteins. The abrupt degenerative changes observed during the later stages of the experiment showed maximum damage to the liver, with enlargement of sinusoidal gaps accompanied by maximum necrosis in the tissues.
Keywords: arthritis, diclofenac, histoarchitecture, sinusoidal
Procedia PDF Downloads 271
1808 Enhancing the Efficiency of Organic Solar Cells Using Metallic Nanoparticles
Authors: Sankara Rao Gollu, Ramakant Sharma, G. Srinivas, Souvik Kundu, Dipti Gupta
Abstract:
In recent years, bulk heterojunction organic solar cells (BHJ OSCs) based on polymer-fullerene blends have attracted wide research attention due to their numerous advantages, such as light weight, easy processability, eco-friendliness, low cost, and suitability for large-area roll-to-roll manufacturing. BHJ OSCs usually suffer from insufficient light absorption because the photoactive layer must be kept thin (< 150 nm) owing to the small exciton diffusion length (~10 nm) and low charge carrier mobilities. It is thus highly desirable to enhance light absorption as well as charge transport by alternative methods so as to improve device efficiency. In this work, we have therefore focused on incorporating metallic nanostructures in the active layer or charge transport layer to enhance absorption and improve charge transport.
Keywords: organic solar cell, efficiency, bulk heterojunction, polymer-fullerene
Procedia PDF Downloads 397
1807 Expression Level of Dehydration-Responsive Element Binding/DREB Gene of Some Local Corn Cultivars from Kisar Island-Maluku Indonesia Using Quantitative Real-Time PCR
Authors: Hermalina Sinay, Estri L. Arumingtyas
Abstract:
The research objective was to determine the expression level of the dehydration-responsive element binding (DREB) gene in local corn cultivars from Kisar Island, Maluku. The study used a randomized block design with a single factor consisting of six local corn cultivars obtained from farmers in Kisar Island and one reference variety, released by the government as drought tolerant, obtained from the Cereal Crops Research Institute (ICERI), Maros, South Sulawesi. Leaf samples were taken from the second leaf after the flag leaf at 65 days after planting. Total RNA was isolated from the leaf samples according to the protocol of the R & A-BlueTM Total RNA Extraction Kit and used as a template for cDNA synthesis. cDNA was synthesized from total RNA according to the protocol of the One-Step Reverse Transcriptase PCR Premix Kit. Real-time PCR was performed on the reverse-transcribed cDNA following the procedures of the Real MODTM Green Real-Time PCR Master Mix Kit. Data from the real-time PCR were analyzed using the relative quantification method based on the crossing point / cycle threshold (CP/CT). The analysis showed that DREB gene expression was highest in the Deep Yellow local corn cultivar and lowest in the Rubby Brown Cob cultivar. It can be concluded that the expression level of the DREB gene in the Deep Yellow local corn cultivar was higher than in the other local corn cultivars and in the Srikandi reference variety.
Keywords: expression, level, DREB gene, local corn cultivars, Kisar Island, Maluku
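Relative quantification from cycle-threshold values is commonly computed with the comparative (2^-ΔΔCt) method; the abstract does not give its exact formula, so the sketch below uses that standard form with hypothetical Ct values:

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal, efficiency=2.0):
    """Comparative threshold-cycle method:
    fold change = E ** -((Ct_target - Ct_ref)_sample - (Ct_target - Ct_ref)_calibrator),
    where E = 2 assumes perfect doubling per PCR cycle."""
    delta_sample = ct_target - ct_ref
    delta_cal = ct_target_cal - ct_ref_cal
    return efficiency ** -(delta_sample - delta_cal)

# Hypothetical Ct values: DREB gene vs. a reference gene, in a cultivar
# sample and in the calibrator (reference variety).
fold = relative_expression(22.0, 18.0, 25.0, 18.0)
```

With these invented numbers the cultivar would show an 8-fold higher DREB expression than the calibrator.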
Procedia PDF Downloads 299
1806 The Effect of Incorporation of Inulin as a Fat Replacer on the Quality of Milk Products Vis-À-Vis Ice Cream
Authors: Harish Kumar Sharma
Abstract:
The influence of different levels of inulin as a fat replacer on the quality of ice cream was investigated. The physicochemical, rheological, and textural properties of control ice cream and ice cream prepared with different proportions of inulin were determined and correlated using Pearson correlation and Principal Component Analysis (PCA). Based on overall acceptability, ice cream with 4% inulin was found best and was selected for preparation of ice cream with inulin:SPI in different proportions. Compared with the control ice cream, inulin:SPI ice cream showed different rheological properties, with significantly higher apparent viscosities and consistency coefficients and greater deviations from Newtonian flow. In addition, both hardness and melting resistance increased significantly with increasing SPI content in ice cream prepared with inulin:SPI. Hardness also increased for inulin-based ice cream compared to the control, but it melted significantly faster than the latter. Colour values decreased significantly in both cases compared to the control sample. The deliberation focuses on the effect of incorporating inulin on the quality of ice cream.
Keywords: fat replacer, inulin, ice cream, viscosity, principal component analysis
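The Pearson correlation used to relate formulation parameters to the measured properties can be sketched directly; the inulin-level/viscosity numbers below are invented illustration values, not the study's data:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient r = cov(x, y) / (sd(x) * sd(y))."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical readings: inulin level (%) vs. apparent viscosity (Pa.s)
inulin = [0, 2, 4, 6, 8]
viscosity = [0.8, 1.1, 1.5, 1.9, 2.2]
r = pearson(inulin, viscosity)
```

An r close to +1 here would mirror the reported trend of viscosity rising with the fat-replacer level.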
Procedia PDF Downloads 384
1805 A Constructed Wetland as a Reliable Method for Grey Wastewater Treatment in Rwanda
Authors: Hussein Bizimana, Osman Sönmez
Abstract:
Constructed wetlands are currently among the most widely recognized wastewater treatment options, especially in developing countries, where they have the potential to improve water quality and create valuable wildlife habitat, with relatively simple operation and maintenance requirements. The lack of grey wastewater treatment facilities at the Kigali Institute of Science and Technology in Rwanda causes pollution in the surrounding localities of Rugunga sector, where poor sanitation is already a problem. To treat the grey water produced at the Kigali Institute of Science and Technology, which has high BOD concentration, high nutrient concentrations, and high alkalinity, a horizontal sub-surface flow pilot-scale constructed wetland was designed for operation at the institute. The study was carried out with a sedimentation tank of 5.5 m x 1.42 m x 1.2 m deep and a horizontal sub-surface constructed wetland of 4.5 m x 2.5 m x 1.42 m deep. Grey wastewater flows at 2.5 m3/d through the vegetated, sandy pilot plant. The filter media consisted of 0.6 to 2 mm coarse sand with a hydraulic conductivity of 0.00003472 m/s, and cattails (Typha latifolia spp.) were used as the plant species. The effluent flow rate of the plant is designed to be 1.5 m3/day, and the retention time will be 24 hours. BOD, COD, and TSS removals of 72% to 79% are expected, while nutrient (nitrogen and phosphate) removal is estimated in the range of 34% to 53%. Every effluent characteristic will meet the Rwanda Utility Regulatory Agency guidelines, primarily because the retention time allowed is sufficient to reduce contaminants in the raw wastewater. A treated-water reuse system was developed so that the water will be used again in the campus irrigation system.
Keywords: constructed wetlands, hydraulic conductivity, grey waste water, cattails
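For a sub-surface flow wetland, the nominal hydraulic retention time follows from the bed's void volume divided by the flow rate. The sketch below uses the bed dimensions and 2.5 m3/d inflow quoted above, with an assumed sand porosity of 0.35 (the abstract does not report porosity, so this is an illustrative assumption):

```python
def retention_time_days(length_m, width_m, depth_m, porosity, flow_m3_per_day):
    """Nominal hydraulic retention time HRT = void volume / flow rate."""
    void_volume = length_m * width_m * depth_m * porosity  # m3 of pore space
    return void_volume / flow_m3_per_day

# Bed of 4.5 m x 2.5 m x 1.42 m, assumed porosity 0.35, inflow 2.5 m3/d
hrt = retention_time_days(4.5, 2.5, 1.42, 0.35, 2.5)
```

With these assumptions the nominal HRT comes out to about 2.2 days; the 24-hour figure quoted in the abstract would correspond to a smaller effective volume or different porosity.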
Procedia PDF Downloads 610
1804 Falling and Rising of Solid Particles in Thermally Stratified Fluid
Authors: Govind Sharma, Bahni Ray
Abstract:
The ubiquitous phenomenon of particle settling is governed by the surrounding fluid medium. Thermally stratified fluid alters the settling of particles as well as their interactions. Direct numerical simulation (DNS) is carried out with an open-source library, Immersed Boundary Adaptive Mesh Refinement (IBAMR), based on the Distributed Lagrange Multiplier (DLM) method, to quantify the fundamental mechanism. The presence of a background density gradient due to thermal stratification replaces the drafting-kissing-tumbling behavior seen in a homogeneous fluid with drafting-kissing-separation behavior. Simulations are performed over a range of particle-fluid density ratios, and it is shown that the effect of stratification on particle interactions varies with the density ratio. It is observed that the combined role of buoyancy and inertia governs the physical mechanism of particle-particle interaction.
Keywords: direct numerical simulation, distributed lagrangian multiplier, rigidity constraint, sedimentation, stratification
Procedia PDF Downloads 137
1803 In situ Biodegradation of Endosulfan, Imidacloprid, and Carbendazim Using Indigenous Bacterial Cultures of Agriculture Fields of Uttarakhand, India
Authors: Geeta Negi, Pankaj, Anjana Srivastava, Anita Sharma
Abstract:
In the present study, the presence of endosulfan, imidacloprid, and carbendazim in soil, vegetable, cereal, and water samples from agricultural fields of Uttarakhand was observed. With a view to biodegrading these pesticides, nine bacterial isolates were recovered from soil samples of the fields, tolerating endosulfan, imidacloprid, and carbendazim at 100 to 200 µg/ml. Three bacterial consortia, each of three bacterial isolates, were used for in vitro bioremediation experiments on carbendazim, imidacloprid, and endosulfan, respectively. Maximum degradation (87% and 83%) of α- and β-endosulfan, respectively, was observed in soil slurry by the consortium. Degradation of imidacloprid and carbendazim under similar conditions was 88.4% and 77.5%, respectively. FT-IR analysis of biodegraded pesticide samples in liquid media showed stretching of various bonds. GC-MS of the biodegraded endosulfan sample in soil slurry showed the presence of non-toxic intermediates. A pot trial with bacterial treatments lowered the uptake of pesticides in onion plants.
Keywords: biodegradation, carbendazim, consortium, endosulfan
Procedia PDF Downloads 375
1802 Analysis of Scaling Effects on Analog/RF Performance of Nanowire Gate-All-Around MOSFET
Authors: Dheeraj Sharma, Santosh Kumar Vishvakarma
Abstract:
We present a detailed analysis of analog and radio-frequency (RF) performance at different gate lengths for the nanowire cylindrical-gate (CylG) gate-all-around (GAA) MOSFET. The CylG GAA MOSFET not only suppresses short channel effects (SCEs); it is also a good candidate for analog/RF applications due to its high transconductance (gm) and high cutoff frequency (fT). The presented work would benefit a new generation of RF circuits and systems across a broad range of applications and operating frequencies covering the RF spectrum. For this purpose, the analog/RF figures of merit of the CylG GAA MOSFET are analyzed in terms of gate-to-source capacitance (Cgs), gate-to-drain capacitance (Cgd), transconductance generation factor gm/Id (where Id represents the drain current), intrinsic gain, output resistance, fT, maximum frequency of oscillation (fmax), and gain-bandwidth (GBW) product.
Keywords: gate-all-around MOSFET, GAA, output resistance, transconductance generation factor, intrinsic gain, cutoff frequency, fT
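Among the listed figures of merit, the cutoff frequency is tied to the transconductance and gate capacitances by the familiar small-signal relation fT ≈ gm / (2π(Cgs + Cgd)). A sketch with hypothetical device values, not the paper's extracted numbers:

```python
import math

def cutoff_frequency(gm, cgs, cgd):
    """Unity-current-gain cutoff frequency f_T = gm / (2*pi*(Cgs + Cgd))."""
    return gm / (2.0 * math.pi * (cgs + cgd))

# Hypothetical small-signal values for a short-channel GAA nanowire device:
# gm = 1 mS, Cgs = 40 aF, Cgd = 10 aF
ft = cutoff_frequency(gm=1e-3, cgs=40e-18, cgd=10e-18)
```

With 1 mS of transconductance against only 50 aF of total gate capacitance, fT lands in the terahertz range, which is part of why nanowire GAA devices are attractive for RF.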
Procedia PDF Downloads 399
1801 A Deep Learning Approach to Real Time and Robust Vehicular Traffic Prediction
Authors: Bikis Muhammed, Sehra Sedigh Sarvestani, Ali R. Hurson, Lasanthi Gamage
Abstract:
Vehicular traffic events have highly complex spatial correlations and temporal interdependencies and are also influenced by environmental events such as weather conditions. To capture these interdependencies and make more realistic vehicular traffic predictions, graph neural network (GNN) based traffic prediction models have been extensively utilized, owing to their ability to capture non-Euclidean spatial correlation very effectively. However, most existing GNN-based traffic prediction models have limitations in learning complex and dynamic spatial and temporal patterns due to the following missing factors. First, most use static distance, or sometimes haversine distance, between spatially separated traffic observations to estimate spatial correlation. Second, most do not incorporate environmental events that have a major impact on normal traffic states. Finally, most do not use an attention mechanism to focus on the most informative traffic observations. The objective of this paper is to make real-time vehicular traffic predictions while incorporating the effect of weather conditions. To fill the gaps mentioned above, our prediction model uses real-time driving distance between sensors to build a distance matrix, or spatial adjacency matrix, and capture spatial correlation. In addition, our prediction model considers the effect of six types of weather conditions and has attention mechanisms in both spatial and temporal data aggregation. Our model efficiently captures the spatial and temporal correlation between traffic events; it relies on a graph attention network (GAT) and bidirectional long short-term memory (Bi-LSTM) layers with attention and is called GAT-BILSTMA.
Keywords: deep learning, real time prediction, GAT, Bi-LSTM, attention
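The spatial attention step can be sketched as a dot-product weighting over a sensor's neighbors: each neighbor's state is scored against the query sensor, the scores are soft-maxed, and the neighborhood is aggregated with those weights. This is a bare-bones stand-in for the paper's GAT layer (which uses learned projections), with invented two-feature sensor states:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, neighbors):
    """Aggregate neighboring sensor states, weighting each neighbor by its
    dot-product similarity with the query sensor's state."""
    scores = [sum(q * n for q, n in zip(query, nb)) for nb in neighbors]
    weights = softmax(scores)
    dim = len(query)
    return [sum(w * nb[d] for w, nb in zip(weights, neighbors)) for d in range(dim)]

# Hypothetical 2-feature states (normalized speed, occupancy) of 3 upstream sensors
query = [1.0, 0.0]
neighbors = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
agg = attend(query, neighbors)
```

Neighbors most similar to the query sensor dominate the aggregate, which is the effect the attention layers in the model are after.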
Procedia PDF Downloads 73
1800 Economic Activities Associated with Extraction of Riverbed Materials in the Tinau River, Nepal
Authors: Khet Raj Dahal, Dhruva Dhital, Chhatra Mani Sharma
Abstract:
A study was conducted during 2012-2013 in a selected reach of the Tinau River, Nepal. The main objective was to quantify employment and income generation from the extraction of construction materials from the river. A 10 km stretch of the river was selected for the study. A sample survey with a semi-structured questionnaire and field observation were the main tools used during the field investigation. Extraction of riverbed materials from the banks, beds, and floodplain areas of the river has provided many kinds of job opportunities for people living in the vicinity of the river. It has also generated substantial revenue, which has been invested in many kinds of social and infrastructure development over the years. Though extraction of riverbed materials is beneficial for income and employment generation, it also has negative environmental impacts in and around the river. The study concluded that riverbed extraction should continue, with special monitoring and evaluation, in areas where there is still room for extraction.
Keywords: extraction, crusher plants, economic activities, Tinau River
Procedia PDF Downloads 695
1799 Real-Time Big-Data Warehouse a Next-Generation Enterprise Data Warehouse and Analysis Framework
Authors: Abbas Raza Ali
Abstract:
Big Data technology is gradually becoming a dire need of large enterprises, which generate massive volumes of offline and streaming data in both structured and unstructured formats on a daily basis. It is challenging to effectively extract useful insights from such large-scale datasets; sometimes it even becomes a technology constraint to manage more than a few months of transactional data history. This paper presents a framework to efficiently manage massively large and complex datasets. The framework has been tested at a communication service provider producing very large, complex streaming data in binary format. The communication industry is bound by regulators to maintain the history of subscribers' call records, where every action of a subscriber generates a record. Managing and analyzing transactional data also allows service providers to better understand their customers' behavior; for example, deep packet inspection requires transactional internet usage data to explain subscribers' internet usage behavior. However, current relational database systems limit service providers to maintaining history only at a semantic level, aggregated per subscriber. The framework addresses these challenges by leveraging Big Data technology, which optimally manages and allows deep analysis of complex datasets. The framework has been applied to offload the service provider's existing Intelligent Network Mediation and relational Data Warehouse onto Big Data. The service provider has a subscriber base of over 50 million, with yearly growth of 7-10%. The end-to-end process takes no more than 10 minutes, covering binary-to-ASCII decoding of call detail records, stitching of all the interrogations of a call (transformations), and aggregation of all the call records of a subscriber.
Keywords: big data, communication service providers, enterprise data warehouse, stream computing, Telco IN Mediation
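The decode-stitch-aggregate flow described above can be sketched in miniature: unpack fixed-width binary records, then roll them up per subscriber. The record layout here is hypothetical, not the provider's actual CDR format:

```python
import struct
from collections import defaultdict

# Hypothetical fixed-width binary CDR layout (big-endian):
# subscriber id (uint32), call duration in seconds (uint16), bytes transferred (uint32)
RECORD = struct.Struct(">IHI")

def decode_records(blob):
    """Decode a binary blob of back-to-back fixed-width records."""
    for offset in range(0, len(blob), RECORD.size):
        yield RECORD.unpack_from(blob, offset)

def aggregate(records):
    """Stitch all records of a subscriber into per-subscriber totals."""
    totals = defaultdict(lambda: [0, 0])  # subscriber -> [seconds, bytes]
    for sub, dur, nbytes in records:
        totals[sub][0] += dur
        totals[sub][1] += nbytes
    return dict(totals)

blob = RECORD.pack(7, 60, 1000) + RECORD.pack(7, 30, 500) + RECORD.pack(9, 10, 0)
usage = aggregate(decode_records(blob))
```

In the real framework the same pattern runs over distributed storage and streaming input rather than an in-memory blob.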
Procedia PDF Downloads 176
1798 Road Accidents Bigdata Mining and Visualization Using Support Vector Machines
Authors: Usha Lokala, Srinivas Nowduri, Prabhakar K. Sharma
Abstract:
Useful information has been extracted from road accident data in the United Kingdom (UK) using data analytics methods, with the goal of avoiding possible accidents in rural and urban areas. The analysis makes use of several methodologies, such as data integration, support vector machines (SVM), correlation machines, and multinomial goodness of fit. The datasets were imported from the UK traffic department with due permission. The information extracted from these huge datasets forms a basis for several predictions, which in turn help avoid unnecessary losses. Since the data is expected to grow continuously over time, this work primarily proposes a new framework model that can be trained on, and adapt itself to, new data and make accurate predictions. This work also discusses the use of the SVM methodology for text classifiers on the obtained traffic data. Finally, it emphasizes the uniqueness and adaptability of the SVM methodology for this kind of research work.
Keywords: support vector machines (SVM), machine learning (ML), department for transport (DfT)
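The SVM methodology referenced above can be illustrated with a self-contained linear SVM trained by the Pegasos sub-gradient method on toy, linearly separable "accident" data. The features, labels, and hyperparameters are invented for illustration; the study worked with real UK accident datasets:

```python
import random

def train_linear_svm(data, labels, lam=0.01, epochs=500, seed=0):
    """Pegasos-style sub-gradient training of a linear SVM (hinge loss)."""
    rng = random.Random(seed)
    dim = len(data[0])
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(data)), len(data)):
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            margin = labels[i] * sum(wj * xj for wj, xj in zip(w, data[i]))
            w = [(1.0 - eta * lam) * wj for wj in w]  # regularization shrink
            if margin < 1:  # hinge-loss sub-gradient step on margin violations
                w = [wj + eta * labels[i] * xj for wj, xj in zip(w, data[i])]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

# Toy data: (speed excess, road curvature, constant bias feature); +1 = severe
X = [[2.0, 1.0, 1.0], [1.5, 1.2, 1.0], [0.2, 0.1, 1.0], [0.1, 0.3, 1.0]]
y = [1, 1, -1, -1]
w = train_linear_svm(X, y)
```

For text classification, the same trainer would run on bag-of-words feature vectors extracted from accident report text.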
Procedia PDF Downloads 276
1797 Deep Learning Based on Image Decomposition for Restoration of Intrinsic Representation
Authors: Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Kensuke Nakamura, Dongeun Choi, Byung-Woo Hong
Abstract:
Artefacts are commonly encountered in the imaging process of clinical computed tomography (CT), where an artefact is any systematic discrepancy between the reconstructed observation and the true attenuation coefficient of the object. CT images are inherently prone to artefacts because the image formation process involves a large number of independent detectors that are assumed to yield consistent measurements. Artefact types include noise, beam hardening, scatter, pseudo-enhancement, motion, helical, ring, and metal artefacts, which cause serious difficulties in reading images. It is therefore desirable to remove nuisance factors from the degraded image, leaving the fundamental intrinsic information that can provide better interpretation of the anatomical and pathological characteristics. However, this is considered a difficult task due to the high dimensionality and variability of the data to be recovered, which naturally motivates the use of machine learning techniques. We propose an image restoration algorithm based on a deep neural network framework in which denoising auto-encoders are stacked to build multiple layers. A denoising auto-encoder is a variant of the classical auto-encoder that takes an input and maps it to a hidden representation through a deterministic mapping with a non-linear activation function. The latent representation is then mapped back into a reconstruction of the same size as the input. The reconstruction error is measured by the traditional squared error, assuming the residual follows a normal distribution. In addition to the designed loss function, an effective regularization scheme is applied using residual-driven dropout, determined from the gradient at each layer. The optimal weights are computed by the classical stochastic gradient descent algorithm combined with the back-propagation algorithm.
In our algorithm, we initially decompose an input image into its intrinsic representation and nuisance factors, including artefacts, based on the classical total variation problem, which can be efficiently optimized by convex optimization algorithms such as the primal-dual method. The intrinsic forms of the input images are provided to the deep denoising auto-encoders together with their original forms in the training phase. In the testing phase, a given image is first decomposed into its intrinsic form and then fed to the trained network to obtain its reconstruction. We apply our algorithm to the restoration of CT images corrupted by artefacts. It is shown that our algorithm improves readability and enhances the anatomical and pathological properties of the object. Quantitative evaluation is performed in terms of PSNR, and qualitative evaluation shows significant improvement in reading images despite degrading artefacts. The experimental results indicate the potential of our algorithm as a prior solution to image interpretation tasks in a variety of medical imaging applications. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
Keywords: auto-encoder neural network, CT image artefact, deep learning, intrinsic image representation, noise reduction, total variation
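A minimal, dependency-free caricature of the denoising auto-encoder idea: corrupt the input, encode it to a latent value, decode, and descend the squared reconstruction error against the clean input. This linear one-unit version is far smaller than the stacked non-linear networks described above, and the toy data on a line is invented for illustration:

```python
import random

def train_dae(data, noise=0.1, lr=0.01, epochs=500, seed=1):
    """One-unit linear denoising auto-encoder trained by plain SGD:
    noisy input -> scalar latent -> reconstruction, squared-error loss
    measured against the CLEAN input."""
    rng = random.Random(seed)
    w_enc = [rng.uniform(-0.5, 0.5) for _ in range(2)]
    w_dec = [rng.uniform(-0.5, 0.5) for _ in range(2)]
    for _ in range(epochs):
        for x in data:
            noisy = [xi + rng.gauss(0.0, noise) for xi in x]       # corrupt
            h = sum(w * ni for w, ni in zip(w_enc, noisy))          # encode
            xhat = [w * h for w in w_dec]                           # decode
            err = [2.0 * (xh - xi) for xh, xi in zip(xhat, x)]      # dL/dxhat
            grad_h = sum(e * w for e, w in zip(err, w_dec))         # dL/dh
            w_dec = [w - lr * e * h for w, e in zip(w_dec, err)]
            w_enc = [w - lr * grad_h * ni for w, ni in zip(w_enc, noisy)]
    return w_enc, w_dec

# Toy clean data lying on the line (t, 2t), perfectly captured by one latent unit
data = [[t * 1.0, t * 2.0] for t in [-1.0, -0.5, 0.5, 1.0]]
w_enc, w_dec = train_dae(data)
h = sum(w * xi for w, xi in zip(w_enc, [1.0, 2.0]))
recon = [w * h for w in w_dec]
```

After training, a clean point such as (1, 2) should be reconstructed nearly exactly, which is the behavior the stacked denoising layers exploit at scale.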
Procedia PDF Downloads 190
1796 Performance Analysis of Multichannel OCDMA-FSO Network under Different Pervasive Conditions
Authors: Saru Arora, Anurag Sharma, Harsukhpreet Singh
Abstract:
To meet the growing need for high data rates and bandwidth, various efforts have been made toward more efficient communication systems. Optical code division multiple access (OCDMA) over a free space optics (FSO) communication system is an effective means of providing transmission at high data rates with a low bit error rate and a low amount of multiple access interference. This paper demonstrates an OCDMA-over-FSO communication system up to a range of 7000 m at a data rate of 5 Gbps. Initially, an 8-user OCDMA-FSO system is simulated, with pseudo-orthogonal codes used for encoding. A simulative analysis of performance parameters, such as power and core effective area, that affect the bit error rate (BER) of the system is also carried out. The analysis reveals that the transmission length is limited by the multiple access interference (MAI) effect, which arises as the number of users in the system increases.
Keywords: FSO, PSO, bit error rate (BER), OptiSystem simulation, multiple access interference (MAI), q-factor
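The BER figures discussed above are commonly related to received signal quality through the Q-factor via the standard conversion BER = (1/2) erfc(Q/√2); the simulator's internal model is not reproduced here, only that textbook relation:

```python
import math

def ber_from_q(q):
    """Standard Q-factor to BER conversion: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))
```

A Q-factor of about 6 corresponds to the often-quoted BER of 10^-9, a common acceptance threshold for optical links.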
Procedia PDF Downloads 366
1795 Arabic Light Word Analyser: Roles with Deep Learning Approach
Authors: Mohammed Abu Shquier
Abstract:
This paper introduces a word segmentation method using the novel BP-LSTM-CRF architecture for processing semantic output training. The objective of morphological analysis tools is to link a formal morpho-syntactic description to a lemma, along with morpho-syntactic information, a vocalized form, a vocalized analysis with morpho-syntactic information, and a list of paradigms. A key objective is to continuously enhance the proposed system through an inductive learning approach that considers semantic influences. The system is currently under construction and development based on data-driven learning. To evaluate the tool, an experiment on homograph analysis was conducted. The tool also encompasses the assumption of deep binary segmentation hypotheses, the arbitrary choice of trigram or n-gram continuation probabilities, language limitations, and morphology for both Modern Standard Arabic (MSA) and Dialectal Arabic (DA), which provide justification for updating this system. Most Arabic word analysis systems are based on phonotactic morpho-syntactic analysis of a word using lexical rules, mainly employed in MENA language technology tools, without taking contextual or semantic morphological implications into account. It is therefore necessary to have an automatic analysis tool that takes the word sense into account, and not only the morpho-syntactic category. Moreover, such systems are also based on statistical/stochastic models. These stochastic models, such as HMMs, have shown their effectiveness in different NLP applications: part-of-speech tagging, machine translation, speech recognition, etc.
As an extension, we focus on language modeling using a Recurrent Neural Network (RNN); given that morphological analysis coverage has been very low for dialectal Arabic, it is particularly important to investigate how dialect data influence the accuracy of these approaches by developing dialectal morphological processing tools, showing that handling dialectal variability can improve analysis. Keywords: NLP, DL, ML, analyser, MSA, RNN, CNN
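As a toy sketch of the segmentation targets such a BP-LSTM-CRF model would be trained to predict (an illustration, not the paper's implementation), word segmentation can be cast as character-level B/I tagging:

```python
def to_bi_labels(words):
    """Convert a gold segmentation (list of segments) into character-level
    B/I labels, the target sequence a BiLSTM-CRF segmenter would predict."""
    labels = []
    for word in words:
        labels.append("B")
        labels.extend("I" * (len(word) - 1))
    return labels

def from_bi_labels(chars, labels):
    """Recover a segmentation from characters plus predicted B/I labels."""
    words, current = [], ""
    for ch, lab in zip(chars, labels):
        if lab == "B" and current:
            words.append(current)
            current = ""
        current += ch
    if current:
        words.append(current)
    return words

segmentation = ["كتب", "وا"]           # toy example: stem + clitic
labels = to_bi_labels(segmentation)
print(labels)                           # ['B', 'I', 'I', 'B', 'I']
print(from_bi_labels("كتبوا", labels))  # ['كتب', 'وا']
```

The neural model's role is then purely to assign the B/I sequence; the CRF layer enforces valid label transitions (e.g., a sequence cannot begin with "I").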
Procedia PDF Downloads 44
1794 Integrating Natural Language Processing (NLP) and Machine Learning in Lung Cancer Diagnosis
Authors: Mehrnaz Mostafavi
Abstract:
The assessment and categorization of incidental lung nodules present a considerable challenge in healthcare, often necessitating resource-intensive multiple computed tomography (CT) scans for growth confirmation. This research addresses this issue by introducing a distinct computational approach leveraging radiomics and deep-learning methods. However, understanding local services is essential before implementing these advancements. With diverse tracking methods in place, there is a need for efficient and accurate identification approaches, especially in the context of managing lung nodules alongside pre-existing cancer scenarios. This study explores the integration of text-based algorithms in medical data curation, indicating their efficacy in conjunction with machine learning and deep-learning models for identifying lung nodules. Combining medical images with text data has demonstrated superior data retrieval compared to using each modality independently. While deep learning and text analysis show potential in detecting previously missed nodules, challenges persist, such as increased false positives. The presented research introduces a Structured-Query-Language (SQL) algorithm designed for identifying pulmonary nodules in a tertiary cancer center, externally validated at another hospital. Leveraging natural language processing (NLP) and machine learning, the algorithm categorizes lung nodule reports based on sentence features, aiming to facilitate research and assess clinical pathways. The hypothesis posits that the algorithm can accurately identify lung nodule CT scans and predict concerning nodule features using machine-learning classifiers. Through a retrospective observational study spanning a decade, CT scan reports were collected, and an algorithm was developed to extract and classify data. Results underscore the complexity of lung nodule cohorts in cancer centers, emphasizing the importance of careful evaluation before assuming a metastatic origin. 
The SQL and NLP algorithms demonstrated high accuracy in identifying lung nodule sentences, indicating potential for local service evaluation and research dataset creation. Machine-learning models exhibited strong accuracy in predicting concerning changes in lung nodule scan reports. While limitations include variability in disease group attribution, the potential for correlation rather than causality in clinical findings, and the need for further external validation, the algorithm's accuracy and potential to support clinical decision-making and healthcare automation represent a significant stride in lung nodule management and research. Keywords: lung cancer diagnosis, structured-query-language (SQL), natural language processing (NLP), machine learning, CT scans
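As a rough sketch of the sentence-classification step (the actual pipeline combined SQL extraction with NLP features and trained machine-learning classifiers; the regex patterns below are hypothetical, and negation handling is omitted), a rule-based baseline might look like:

```python
import re

# Hypothetical patterns for illustration only.
NODULE_PATTERN = re.compile(r"\b(pulmonary |lung )?nodule[s]?\b", re.IGNORECASE)
CONCERN_PATTERN = re.compile(r"\b(enlarg|increas|grow|suspicious|spiculat)\w*", re.IGNORECASE)

def classify_sentence(sentence: str) -> dict:
    """Label a report sentence: does it mention a nodule, and does it
    flag a concerning change?"""
    mentions = bool(NODULE_PATTERN.search(sentence))
    return {"mentions_nodule": mentions,
            "concerning": mentions and bool(CONCERN_PATTERN.search(sentence))}

print(classify_sentence("A 6 mm pulmonary nodule is unchanged."))
print(classify_sentence("The right lower lobe nodule has increased in size."))
```

A trained classifier replaces the hand-written patterns with learned sentence features, which is what allows the reported accuracy on varied report phrasing.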
Procedia PDF Downloads 103
1793 Image Segmentation with Deep Learning of Prostate Cancer Bone Metastases on Computed Tomography
Authors: Joseph M. Rich, Vinay A. Duddalwar, Assad A. Oberai
Abstract:
Prostate adenocarcinoma is the most common cancer in males, and bone is the commonest site of metastatic prostate carcinoma (mPC). Treatment monitoring is based on the evaluation and characterization of lesions on multiple imaging studies, including Computed Tomography (CT). Monitoring the osseous disease burden, including follow-up of known lesions and identification and characterization of new ones, is a laborious task for radiologists. Deep learning algorithms are increasingly used to identify and segment osseous metastatic disease and to provide accurate information regarding metastatic burden. Here, nnUNet was used to produce a model that segments vertebral bone metastatic lesions of prostate adenocarcinoma on CT scan images. nnUNet is an open-source Python package that adds optimizations to the deep learning-based UNet architecture but has not been extensively combined with transfer learning techniques, as it lacks a readily available facility for this method. The IRB-approved study dataset includes imaging studies from patients with mPC who were enrolled in clinical trials at the University of Southern California (USC) Health Science Campus and the Los Angeles County (LAC)/USC Medical Center. Manual segmentation of metastatic lesions was completed by an expert radiologist, Dr. Vinay Duddalwar (20+ years in radiology and oncologic imaging), to serve as ground truth for the automated segmentation. Despite nnUNet's success on some medical segmentation tasks, it produced an average Dice Similarity Coefficient (DSC) of only 0.31 on the USC dataset. DSC results fell into a bimodal distribution, with most scores either above 0.66 (reasonably accurate) or at 0 (no lesion detected). Applying more aggressive data augmentation techniques dropped the DSC to 0.15, and reducing the number of epochs reduced it to below 0.1.
Datasets have been identified for transfer learning, balancing size against similarity to the target data. They include the pancreas data from the Medical Segmentation Decathlon, the Pelvic Reference Data, and CT volumes with multiple organ segmentations (CT-ORG). Challenges in producing an accurate model from the USC dataset include its small size (115 images), its 2D data (nnUNet generally performs better on 3D data), and the limited amount of public data capturing annotated CT images of bone lesions. Optimizations and improvements will be made by applying transfer learning and generative methods, including generative adversarial networks and diffusion models, to augment the dataset. Performance with different libraries, including MONAI and custom architectures in PyTorch, will be compared. In the future, molecular correlations will be tracked against radiologic features for the purpose of multimodal composite biomarker identification. Once validated, these models will be incorporated into evaluation workflows to optimize radiologist review. Our work demonstrates the challenges of applying automated image segmentation to small medical datasets and lays a foundation for techniques to improve performance. As machine learning models become increasingly incorporated into radiologists' workflows, these findings will help improve the speed and accuracy of vertebral metastatic lesion detection. Keywords: deep learning, image segmentation, medicine, nnUNet, prostate carcinoma, radiomics
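The Dice Similarity Coefficient reported above is defined as DSC = 2|A∩B| / (|A| + |B|) for predicted and ground-truth masks A and B; a minimal sketch:

```python
def dice_coefficient(pred, truth):
    """Dice Similarity Coefficient between two binary masks,
    represented here as sets of voxel indices."""
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0  # both empty: perfect agreement by convention
    return 2 * len(pred & truth) / (len(pred) + len(truth))

# Two overlapping toy 'lesions' on a 2D grid
pred  = {(0, 0), (0, 1), (1, 1)}
truth = {(0, 1), (1, 1), (1, 2)}
print(dice_coefficient(pred, truth))  # 2*2 / (3+3) = 0.666...
```

The bimodal DSC distribution described above follows directly from this definition: if the model predicts no voxels at all for a lesion, the intersection is empty and the score is exactly 0.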
Procedia PDF Downloads 97
1792 Aire-Dependent Transcripts have Shortened 3’UTRs and Show Greater Stability by Evading Microrna-Mediated Repression
Authors: Clotilde Guyon, Nada Jmari, Yen-Chin Li, Jean Denoyel, Noriyuki Fujikado, Christophe Blanchet, David Root, Matthieu Giraud
Abstract:
Aire induces ectopic expression of a large repertoire of tissue-specific antigen (TSA) genes in thymic medullary epithelial cells (MECs), driving immunological self-tolerance in maturing T cells. Although important mechanisms of Aire-induced transcription have recently been disclosed through the identification and study of Aire’s partners, the fine transcriptional functions that a number of them underlie and confer on Aire are still unknown. Alternative cleavage and polyadenylation (APA) is an essential mRNA processing step regulated by a termination complex consisting of 85 proteins, 10 of which have been related to Aire. We evaluated APA in MECs in vivo by microarray analysis with mRNA-spanning probes and by RNA deep sequencing. We uncovered a preference of Aire-dependent transcripts for short-3’UTR isoforms and for proximal poly(A) site selection, marked by increased binding of the cleavage factor Cstf-64. RNA interference of the 10 Aire-related proteins revealed that Clp1, a member of the core termination complex, exerts a profound effect on short-3’UTR isoform preference. Clp1 is also significantly upregulated in MECs compared with 25 mouse tissues, in which we found that TSA expression is associated with longer 3’UTR isoforms. Aire-dependent transcripts escape a global 3’UTR lengthening associated with MEC differentiation, thereby evading the repressive effect of microRNAs that are globally upregulated in mature MECs. Consistent with these findings, RNA deep sequencing of actinomycin-D-treated MECs revealed increased stability of short-3’UTR Aire-induced transcripts, resulting in TSA transcript accumulation and contributing to their enrichment in the MECs. Keywords: Aire, central tolerance, miRNAs, transcription termination
Procedia PDF Downloads 384
1791 Application of Advanced Remote Sensing Data in Mineral Exploration in the Vicinity of Heavy Dense Forest Cover Area of Jharkhand and Odisha State Mining Area
Authors: Hemant Kumar, R. N. K. Sharma, A. P. Krishna
Abstract:
The study has been carried out on the Saranda region in Jharkhand and a part of Odisha state. Geospatial data from Hyperion, a hyperspectral remote sensing satellite, have been used, and a wide variety of image-processing techniques have been applied to enhance and extract the mining classes of Fe and Mn ores. Landsat-8 OLI sensor data have also been used to correctly explore related minerals. Various processes have thus been applied to refine the mineralogy classes, and a comparative evaluation of the related frequencies has been done. The Hyperion hyperspectral dataset has proved an effective tool for extracting mineral and rock information within the shortwave infrared band range used. The abundant spatial and spectral information contained in hyperspectral images enables the differentiation of objects for targeted exploration applications such as mineral detection and mining. Keywords: Hyperion, hyperspectral, sensor, Landsat-8
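One of the simplest image-processing operations behind such mineral mapping is a pixel-wise band ratio. As a highly simplified, hypothetical illustration (real Hyperion workflows use calibrated spectral libraries and many more bands; the band choice, reflectance values, and threshold below are invented), a band-ratio mask for candidate iron-rich pixels might be sketched as:

```python
def band_ratio(band_a, band_b, eps=1e-6):
    """Pixel-wise ratio of two spectral bands; high values can highlight
    iron-oxide-rich pixels (illustrative, not calibrated)."""
    return [[a / (b + eps) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(band_a, band_b)]

# Toy 2x2 reflectance grids for a SWIR and an NIR band (hypothetical values)
swir = [[0.42, 0.10], [0.38, 0.12]]
nir  = [[0.20, 0.25], [0.19, 0.24]]

ratios = band_ratio(swir, nir)
mask = [[r > 1.5 for r in row] for row in ratios]  # candidate ore pixels
print(mask)
```

In a real pipeline the same idea is applied per Hyperion band pair across the full scene, and the thresholds are tuned against ground-truth spectra.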
Procedia PDF Downloads 125
1790 Robust and Dedicated Hybrid Cloud Approach for Secure Authorized Deduplication
Authors: Aishwarya Shekhar, Himanshu Sharma
Abstract:
Data deduplication is one of the most important data compression techniques for eliminating duplicate copies of repeating data and has been widely used in cloud storage to reduce storage space and save bandwidth. In this process, duplicate data is expunged, leaving only a single instance of the data to be stored, though an index of each piece of data is still maintained. Data deduplication thus minimizes the storage space an organization requires to retain its data. In most companies, storage systems hold identical copies of numerous pieces of data; deduplication eliminates these extra copies by saving just one copy of the data and replacing the other copies with pointers that lead back to the primary copy. To avoid duplication of data while preserving confidentiality in the cloud, we apply the concept of a hybrid cloud, a fusion of at least one public and one private cloud. As a proof of concept, we implement Java code that provides security as well as removing all duplicated data from the cloud. Keywords: confidentiality, deduplication, data compression, hybridity of cloud
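The single-instance storage idea described here (one physical copy plus pointers, with per-file indexing retained) can be sketched with content hashing. This is an illustrative Python sketch, not the authors' Java implementation, and it omits the security layer:

```python
import hashlib

class DedupStore:
    """Single-instance store: identical content is kept once; extra copies
    become pointers (shared hash keys), while every file's index entry is retained."""

    def __init__(self):
        self.blocks = {}  # content hash -> content (one physical copy)
        self.index = {}   # filename -> content hash (pointer to the primary copy)

    def put(self, name: str, content: bytes) -> bool:
        """Store a file; return True if new physical storage was consumed."""
        digest = hashlib.sha256(content).hexdigest()
        is_new = digest not in self.blocks
        if is_new:
            self.blocks[digest] = content
        self.index[name] = digest
        return is_new

    def get(self, name: str) -> bytes:
        return self.blocks[self.index[name]]

store = DedupStore()
store.put("a.txt", b"hello")
store.put("b.txt", b"hello")  # duplicate: only a pointer is added
print(len(store.blocks), len(store.index))  # 1 2
```

Two files with identical content share one stored block, which is exactly the storage and bandwidth saving the abstract describes.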
Procedia PDF Downloads 384
1789 Characteristics and Challenges of Post-Burn Contractures in Adults and Children: A Descriptive Study
Authors: Hardisiswo Soedjana, Inne Caroline
Abstract:
Deep dermal or full-thickness burns inevitably lead to post-burn contractures, which remain one of the most concerning late complications of burn injuries. Surgical management includes releasing the contracture and resurfacing the defect, accompanied by post-operative rehabilitation; optimal treatment depends on the characteristics of the contractures. This study aims to describe the clinical characteristics, problems, and management of post-burn contractures in adults and children. A retrospective analysis was conducted of the medical records of patients with contractures after burn injuries admitted to Hasan Sadikin General Hospital between January 2016 and January 2018. A total of 50 patients with post-burn contractures were included in the study: 17 adults and 33 children. Most patients were male, with ages ranging from 15 to 59 years among adults and 5 to 9 years among children. The most common educational background among adults was senior high school, while only one third of the children had entered school. The etiology of burns was predominantly flame in adults (82.3%), whereas flame and scald were the leading causes of burn injury in children (11%). By anatomical region, the hands were the most commonly affected site in both adults (35.2%) and children (48.5%). Contractures were identified 6-12 months after the initial burn. Most post-burn hand contractures were resurfaced with a full-thickness skin graft (FTSG) in both adults and children. Eleven patients presented with recurrent contracture after a previous contracture release. Post-operative rehabilitation was conducted for all patients; however, it remains challenging to supervise splinting and exercise once patients are discharged, and especially to ensure compliance in children.
In order to improve quality of life in patients with a history of deep burn injury, prevention of contractures should begin as soon as acute care has been established. Education on the importance of splinting and exercise should be delivered as comprehensibly as possible to adult patients and to the parents of pediatric patients. Keywords: burn, contracture, education, exercise, splinting
Procedia PDF Downloads 130
1788 Tracking of Intramuscular Stem Cells by Magnetic Resonance Diffusion Weighted Imaging
Authors: Balakrishna Shetty
Abstract:
Introduction: Stem cell imaging has been a challenging field since the advent of stem cell treatment in humans, and a series of studies on tagging and tracking stem cells have not been very effective. The present study is an effort by the authors to track stem cells injected into the calf muscles by magnetic resonance diffusion-weighted imaging. Materials and methods: Deep intramuscular injection of stem cells into the calf muscles of patients with peripheral vascular disease is one of the recent treatment modalities followed in our institution. Five patients who underwent deep intramuscular injection of stem cells as treatment were included in this study. Pre-injection and two-hour post-injection MRI of both calf regions was performed on a 1.5 T Philips Achieva 16-channel system using 16-channel torso coils. Axial STIR and axial diffusion-weighted images with b = 0 and b = 1000 values with background suppression (the DWIBS sequence of Philips MR imaging systems) were obtained at 5 mm intervals covering the entire calf, and inverted images were obtained for better visualization. 120 ml of autologous bone-marrow-derived stem cells were processed and enriched under cGMP conditions and reduced to a 40 ml solution containing the mixture of the above stem cells. Approximately 40 to 50 injections, each containing 0.75 ml of processed stem cells, were administered over a marked grid on the calf region. Around 40 injections, each of 1 ml normal saline, were given in the contralateral leg as a control. Results: Significant diffusion hyperintensity was noted at the sites of the injected stem cells; no hyperintensity was noted before the injection, nor on the control side where saline was injected. Conclusion: This is one of the earliest studies in the literature showing diffusion hyperintensity of intramuscularly injected stem cells. The advantages and deficiencies of this study will be discussed during the presentation. Keywords: stem cells, imaging, DWI, peripheral vascular disease
Procedia PDF Downloads 74
1787 A Sector-Wise Study on Detecting Earnings Management in India
Authors: Raghuveer Kaur, Kartikay Sharma, Ashu Khanna
Abstract:
Earnings management has been present from time immemorial. The recent downfall of giant enterprises like Enron, Satyam, and WorldCom has brought considerable focus onto the study and detection of earnings management. The present study examines earnings management in one of the fastest-emerging economies, India, and attempts to understand it across different sectors of the economy. The paper first tests a hypothesis to check whether different sectors of India are engaged in earnings management or not. In the later sections, it studies the level of earnings management in six popular sectors of India: IT & BPO, retail, telecom, biotech, hotels, and coffee. To measure earnings management, two popular detection techniques have been employed: the modified Jones model and the Beneish M-score. A total of 332 companies were studied, using publicly available data from the Capitaline database. The paper also classifies the top and bottom five performers by sales turnover in each sector and identifies whether they manage their earnings or not. Keywords: earnings management, India, modified Jones model, Beneish M-score
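For reference, the Beneish M-score combines eight financial ratios with fixed published coefficients; scores above roughly -2.22 are conventionally flagged as likely manipulation. A minimal sketch (variable names follow the standard abbreviations, e.g. DSRI for the days' sales in receivables index, TATA for total accruals to total assets):

```python
def beneish_m_score(dsri, gmi, aqi, sgi, depi, sgai, tata, lvgi):
    """Eight-variable Beneish M-score; higher scores indicate a greater
    likelihood of earnings manipulation (conventional cutoff about -2.22)."""
    return (-4.84 + 0.92 * dsri + 0.528 * gmi + 0.404 * aqi + 0.892 * sgi
            + 0.115 * depi - 0.172 * sgai + 4.679 * tata - 0.327 * lvgi)

# A 'neutral' firm (all year-over-year indices = 1, zero accruals)
# scores about -2.48, below the cutoff.
neutral = beneish_m_score(1, 1, 1, 1, 1, 1, 0, 1)
print(round(neutral, 3))
```

Note how heavily accruals are weighted (coefficient 4.679 on TATA): even a modest positive accrual component can push an otherwise neutral firm above the cutoff.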
Procedia PDF Downloads 516
1786 Identification of Deposition Sequences of the Organic Content of Lower Albian-Cenomanian Age in Northern Tunisia: Correlation between Molecular and Stratigraphic Fossils
Authors: Tahani Hallek, Dhaou Akrout, Riadh Ahmadi, Mabrouk Montacer
Abstract:
The present work is an organic geochemical study of outcrops of the Fahdene Formation in the Mahjouba region, belonging to the eastern part of the Kalaat Senan structure in northwestern Tunisia (the Kef-Tedjerouine area). Analysis of the organic content of the collected samples shows that the Formation in question has an average to good oil potential. This fossilized organic matter has a mixed origin (types II and III), as indicated by relatively high hydrogen index values. This origin is confirmed by the abundance of C29 steranes and by the tricyclic terpane C19/(C19+C23) and tetracyclic terpane C24/(C24+C23) ratios, which suggest a marine depositional environment with a contribution from higher plants. We demonstrate that the heterogeneity of the organic matter, between a marine signature confirmed by the presence of foraminifera and a continental contribution, is the result of an episodic anomaly in relation to the sequence stratigraphy. Given that the study area is an outer platform forming a transition zone between a stable continental domain to the south and a deep basin to the north, we explain the continental contribution by successive forced regressions that interrupted the Albian transgression and allowed the installation of lowstand system tracts. This aspect is represented by incised-valley fill in direct contact with pelagic, deep-sea facies. Consequently, the Fahdene Formation in the Kef-Tedjerouine area consists of transgressive system tracts (TST) abruptly truncated by episodes of continental progradation, resulting in a mixed-influence deposit that has retained heterogeneous organic material. Keywords: molecular geochemistry, biomarkers, forced regression, depositional environment, mixed origin, Northern Tunisia
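The terpane ratios cited above are simple peak-area fractions from the chromatogram; a minimal sketch with hypothetical peak areas (the interpretation that higher values indicate greater higher-plant input follows the abstract, not a calibration):

```python
def terpane_ratios(c19, c23, c24):
    """Tricyclic C19/(C19+C23) and tetracyclic C24/(C24+C23) terpane ratios;
    higher values are read as a greater terrestrial (higher-plant) contribution."""
    return {"C19/(C19+C23)": c19 / (c19 + c23),
            "C24/(C24+C23)": c24 / (c24 + c23)}

# Hypothetical peak areas from a chromatogram
print(terpane_ratios(c19=320.0, c23=480.0, c24=210.0))
```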
Procedia PDF Downloads 250
1785 Transport Related Air Pollution Modeling Using Artificial Neural Network
Authors: K. D. Sharma, M. Parida, S. S. Jain, Anju Saini, V. K. Katiyar
Abstract:
Air quality models form one of the most important components of an urban air quality management plan. Various statistical modeling techniques (regression, multiple regression, and time series analysis) have been used to predict air pollution concentrations in the urban environment. These models calculate pollution concentrations from observed traffic, meteorological, and pollution data once an appropriate relationship has been obtained empirically between these parameters. The artificial neural network (ANN) is increasingly used as an alternative tool for modeling pollutants from vehicular traffic, particularly in urban areas. In the present paper, an attempt has been made to model traffic air pollution, specifically CO concentration, using neural networks. Two scenarios were considered for CO concentration: first, with only classified traffic volume as input, and second, with both classified traffic volume and meteorological variables. The results showed that CO concentration can be predicted with good accuracy using an artificial neural network (ANN). Keywords: air quality management, artificial neural network, meteorological variables, statistical modeling
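The ANN approach can be illustrated with a toy single-hidden-layer network trained by stochastic gradient descent. The inputs, target relationship, and values below are invented for illustration and are not the authors' traffic or CO measurements:

```python
import math
import random

random.seed(0)

# Toy data: [traffic volume (scaled), wind speed (scaled)] -> CO (scaled).
# Hypothetical linear relationship, for illustration only.
data = [([t, w], 0.8 * t - 0.5 * w + 0.1)
        for t in (0.1, 0.3, 0.5, 0.7, 0.9)
        for w in (0.2, 0.5, 0.8)]

H = 3  # hidden units
w1 = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    """Hidden tanh layer followed by a linear output neuron."""
    h = [math.tanh(sum(wi * xi for wi, xi in zip(w1[j], x)) + b1[j]) for j in range(H)]
    return h, sum(w2[j] * h[j] for j in range(H)) + b2

def epoch(lr=0.1):
    """One pass of stochastic gradient descent; returns mean squared error."""
    global b2
    total = 0.0
    for x, y in data:
        h, yhat = forward(x)
        err = yhat - y
        total += err * err
        for j in range(H):
            grad_h = err * w2[j] * (1 - h[j] ** 2)  # backprop through tanh
            w2[j] -= lr * err * h[j]
            b1[j] -= lr * grad_h
            for i in range(2):
                w1[j][i] -= lr * grad_h * x[i]
        b2 -= lr * err
    return total / len(data)

losses = [epoch() for _ in range(200)]
print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

In the paper's setting, the two scenarios correspond simply to changing the input vector: traffic volume alone, or traffic volume concatenated with meteorological variables.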
Procedia PDF Downloads 525
1784 A Framework of Dynamic Rule Selection Method for Dynamic Flexible Job Shop Problem by Reinforcement Learning Method
Authors: Rui Wu
Abstract:
In the volatile modern manufacturing environment, new orders arrive randomly at any time, and pre-emptive methods are infeasible. This calls for a real-time scheduling method that can produce a reasonably good schedule quickly. The dynamic Flexible Job Shop problem is an NP-hard scheduling problem that hybridizes the dynamic Job Shop problem with the Parallel Machine problem. A Flexible Job Shop contains different work centres, each containing parallel machines that can process certain operations. Many algorithms, such as genetic algorithms and simulated annealing, have been proposed to solve static Flexible Job Shop problems. However, the time efficiency of these methods is low, and they are not feasible for dynamic scheduling. Therefore, this research proposes a dynamic rule-selection scheduling system based on reinforcement learning, in which the dynamic Flexible Job Shop problem is divided into several parallel machine problems to decrease its complexity. Firstly, features of the jobs, machines, work centres, and flexible job shop are selected to describe the status of the problem at each decision point in each work centre. Secondly, a reinforcement learning framework using a double-layer deep Q-learning network is applied to select appropriate composite dispatching rules based on the status of each work centre. Then, based on the selected composite dispatching rule, an available operation is selected from the waiting buffer and assigned to an available machine in each work centre. Finally, the proposed algorithm is compared with well-known dispatching rules on the objectives of mean tardiness, mean flow time, mean waiting time, and mean percentage of waiting time in the real-time Flexible Job Shop problem.
The results of the simulations show that the proposed framework achieves reasonable performance and time efficiency. Keywords: dynamic scheduling problem, flexible job shop, dispatching rules, deep reinforcement learning
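A tabular Q-learning loop can stand in for the double-layer deep Q-network to illustrate the rule-selection idea. The states, candidate rules, and reward function below are hypothetical simplifications of a work centre's decision point:

```python
import random

random.seed(1)

RULES = ["SPT", "EDD", "FIFO"]  # candidate composite dispatching rules
q_table = {}                    # (state, rule) -> estimated value

def choose_rule(state, eps=0.1):
    """Epsilon-greedy rule selection; a deep Q-network would replace this table."""
    if random.random() < eps:
        return random.choice(RULES)
    return max(RULES, key=lambda r: q_table.get((state, r), 0.0))

def update(state, rule, reward, next_state, alpha=0.5, gamma=0.9):
    """Standard Q-learning backup toward reward + discounted best next value."""
    best_next = max(q_table.get((next_state, r), 0.0) for r in RULES)
    old = q_table.get((state, rule), 0.0)
    q_table[(state, rule)] = old + alpha * (reward + gamma * best_next - old)

# Toy environment: in a 'congested' work centre, SPT earns the best reward
# (e.g., lowest tardiness), so the agent should learn to prefer it there.
def reward_for(state, rule):
    return 1.0 if (state == "congested" and rule == "SPT") else 0.0

for _ in range(500):
    state = random.choice(["congested", "idle"])
    rule = choose_rule(state)
    update(state, rule, reward_for(state, rule), state)

print(choose_rule("congested", eps=0.0))  # learned preference: SPT
```

The deep variant replaces the `(state, rule)` table with a network over the job/machine/work-centre features described above, which is what lets the method generalize across shop states it has never seen.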
Procedia PDF Downloads 108