Search results for: Bal Deep Sharma
1632 Intertextuality as a Dialogue Between Postmodern Writer J. Fowles and Mid-English Writer J. Donne
Authors: Isahakyan Heghine
Abstract:
Intertextuality, being at the centre of attention of both linguists and literary critics, is vividly expressed in the works of the outstanding British novelist and philosopher J. Fowles. 'The Magus' is a deep psychological and philosophical novel with vivid intertextual links to Greek mythology and to authors from different epochs. The aim of the paper is to show how intertextuality might serve as a dialogue between two authors (J. Fowles and J. Donne) disguised in the dialogue of the novel's two protagonists: Conchis and Nicholas. Contrasting viewpoints concerning man's isolation and loneliness are stated in the dialogue. The conceptual analysis of the text makes it possible both to decode the conceptual information of the text and to uncover its intertextual links.
Keywords: dialogue, conceptual analysis, isolation, intertextuality
Procedia PDF Downloads 330
1631 Deep-Learning Coupled with Pragmatic Categorization Method to Classify the Urban Environment of the Developing World
Authors: Qianwei Cheng, A. K. M. Mahbubur Rahman, Anis Sarker, Abu Bakar Siddik Nayem, Ovi Paul, Amin Ahsan Ali, M. Ashraful Amin, Ryosuke Shibasaki, Moinul Zaber
Abstract:
Thomas Friedman, in his famous book, argued that the world in this 21st century is flat and will continue to become flatter. This is attributed to rapid globalization and the interdependence of humanity, which engendered a tremendous in-flow of human migration towards urban spaces. In order to keep the urban environment sustainable, policy makers need to plan based on extensive analysis of the urban environment. With the advent of high-definition satellite images, high-resolution data, computational methods such as deep neural network analysis, and hardware capable of high-speed analysis, urban planning is seeing a paradigm shift. Legacy data on urban environments are now being complemented with high-volume, high-frequency data. However, the first step in understanding urban space lies in a useful categorization of the space that is usable for data collection, analysis, and visualization. In this paper, we propose a pragmatic categorization method that is readily usable for machine analysis and show the applicability of the methodology in a developing-world setting. Categorization to plan sustainable urban spaces should encompass the buildings and their surroundings. However, the state of the art is mostly dominated by classification of building structures, building types, etc., and largely represents the developed world. Hence, these methods and models are not sufficient for developing countries such as Bangladesh, where the surrounding environment is crucial for the categorization. Moreover, these categorizations propose small-scale classifications, which give limited information, have poor scalability, and are slow to compute in real time. Our proposed method is divided into two steps: categorization and automation. We categorize the urban area in terms of informal and formal spaces and take the surrounding environment into account. A 50 km × 50 km Google Earth image of Dhaka, Bangladesh was visually annotated and categorized by an expert, and consequently a map was drawn. The categorization is based broadly on two dimensions: the state of urbanization and the architectural form of the urban environment. Consequently, the urban space is divided into four categories: 1) highly informal area; 2) moderately informal area; 3) moderately formal area; and 4) highly formal area. In total, sixteen sub-categories were identified. For semantic segmentation and automatic categorization, Google's DeeplabV3+ model was used. The model uses the atrous convolution operation to analyze different layers of texture and shape. This allows us to enlarge the field of view of the filters to incorporate larger context. Imagery encompassing 70% of the urban space was used to train the model, and the remaining 30% was used for testing and validation. The model is able to segment with 75% accuracy and 60% Mean Intersection over Union (mIoU). In this paper, we propose a pragmatic categorization method that is readily applicable for automatic use in both developing- and developed-world contexts. The method can be augmented for real-time socio-economic comparative analysis among cities. It can be an essential tool for policy makers to plan future sustainable urban spaces.
Keywords: semantic segmentation, urban environment, deep learning, urban building, classification
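A minimal sketch of the fine-tuning step described above, assuming the PyTorch/torchvision DeepLabV3 model (a close relative of DeeplabV3+ built on the same atrous-convolution idea). The 16 output classes map to the sixteen sub-categories; the tile size, optimizer and random tensors standing in for annotated Google Earth tiles are illustrative assumptions, not the authors' configuration.

```python
# Sketch: training one step of a DeepLab-style atrous segmentation model
# on 16 urban sub-categories (assumed mapping; toy tensors, not real imagery).
import torch
from torch import nn, optim
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 16  # sixteen sub-categories from the four formality classes

model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, masks):
    """images: (B, 3, H, W) satellite tiles; masks: (B, H, W) integer labels."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)["out"]   # atrous backbone enlarges the filters' field of view
    loss = criterion(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()

dummy_imgs = torch.randn(2, 3, 256, 256)
dummy_masks = torch.randint(0, NUM_CLASSES, (2, 256, 256))
print(train_step(dummy_imgs, dummy_masks))
```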
Procedia PDF Downloads 192
1630 Trends of Cutaneous Melanoma in New Zealand: 2010 to 2020
Authors: Jack S. Pullman, Daniel Wen, Avinash Sharma, Bert Van Der Werf, Richard Martin
Abstract:
Background: New Zealand (NZ) melanoma incidence rates are amongst the highest in the world. Previous studies investigating the incidence of melanoma in NZ were performed for the periods 1995–1999 and 2000–2004 and suggested increasing melanoma incidence rates. Aim: The aim of the study is to provide an up-to-date review of trends in cutaneous melanoma in NZ from the New Zealand Cancer Registry (NZCR) for 2010–2020. Methods: De-identified data were obtained from the NZCR, and relevant demographic and histopathologic information was extracted. Statistical analyses were conducted to calculate age-standardized incidence rates for invasive melanoma (IM) and melanoma in situ (MIS). Secondary results included Breslow thickness and melanoma subtype analysis. Results: There was a decline in the IM age-standardized incidence rate from 30.4 to 23.9 per 100,000 person-years between 2010 and 2020, alongside an increase in the MIS incidence rate from 37.1 to 50.3 per 100,000 person-years. Men had a statistically significantly higher IM incidence rate (p < 0.001) and Breslow thickness (p < 0.001) compared with women. Increased age was associated with a higher incidence of IM, presentation with melanoma of greater Breslow thickness, and more advanced T stage. Conclusion: The incidence of IM in NZ has decreased in the last decade and was associated with an increase in MIS incidence over the same period. This can be explained by earlier detection, dermoscopy, the maturity of prevention campaigns and/or a change in skin protection behavior.
Keywords: melanoma, incidence, epidemiology, New Zealand
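A short worked illustration of direct age standardisation, the calculation behind the rates quoted above. All counts, person-years and standard-population weights below are invented for demonstration only; they are not NZCR data.

```python
# Illustrative direct age-standardisation of an incidence rate (hypothetical numbers).
age_groups   = ["0-39", "40-59", "60-79", "80+"]
cases        = [40, 180, 420, 160]            # hypothetical melanoma cases
person_years = [2.1e6, 1.3e6, 0.8e6, 0.2e6]   # hypothetical person-years at risk
std_weights  = [0.55, 0.26, 0.15, 0.04]       # hypothetical standard-population weights (sum to 1)

age_specific_rates = [c / py * 1e5 for c, py in zip(cases, person_years)]  # per 100,000
asr = sum(w * r for w, r in zip(std_weights, age_specific_rates))
print(f"Age-standardised rate: {asr:.1f} per 100,000 person-years")
```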
Procedia PDF Downloads 65
1629 Brain Age Prediction Based on Brain Magnetic Resonance Imaging by 3D Convolutional Neural Network
Authors: Leila Keshavarz Afshar, Hedieh Sajedi
Abstract:
Estimation of biological brain age from MR images is a topic that has been much addressed in recent years due to its importance for the early diagnosis of diseases such as Alzheimer's. In this paper, we use a 3D Convolutional Neural Network (CNN) to provide a method for estimating the biological age of the brain. The 3D-CNN model is trained on MRI data that has been normalized. In addition, to reduce computation while preserving overall performance, a set of effective slices is selected for age estimation. With this method, the biological age of individuals was estimated from the selected normalized data with a Mean Absolute Error (MAE) of 4.82 years.
Keywords: brain age estimation, biological age, 3D-CNN, deep learning, T1-weighted image, SPM, preprocessing, MRI, canny, gray matter
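A minimal sketch of a 3D-CNN age regressor of the kind described, assuming PyTorch. The layer widths, the 64³ input volume and the random tensors are illustrative assumptions, not the architecture or data used in the paper; the L1 loss is chosen only because it corresponds to the reported MAE metric.

```python
# Toy 3D-CNN regressor for brain-age estimation (illustrative sizes only).
import torch
from torch import nn

class BrainAge3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
        )
        self.regressor = nn.Linear(32, 1)   # single output: predicted age in years

    def forward(self, x):                   # x: (B, 1, D, H, W) normalized MRI volume
        return self.regressor(self.features(x).flatten(1))

model = BrainAge3DCNN()
volumes = torch.randn(4, 1, 64, 64, 64)     # stand-in for preprocessed T1-weighted scans
ages = torch.tensor([[30.0], [54.0], [67.0], [41.0]])
loss = nn.L1Loss()(model(volumes), ages)    # L1 loss mirrors the MAE evaluation metric
print(loss.item())
```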
Procedia PDF Downloads 148
1628 The Interdisciplinary Synergy Between Computer Engineering and Mathematics
Authors: Mitat Uysal, Aynur Uysal
Abstract:
Computer engineering and mathematics share a deep and symbiotic relationship, with mathematics providing the foundational theories and models for advances in computer engineering. From algorithm development to optimization techniques, mathematics plays a pivotal role in solving complex computational problems. This paper explores key mathematical principles that underpin computer engineering, illustrating their significance through a case study that demonstrates the application of optimization techniques using Python code. The case study addresses the well-known vehicle routing problem (VRP), an extension of the traveling salesman problem (TSP), and solves it using a genetic algorithm.
Keywords: VRP, TSP, genetic algorithm, computer engineering, optimization
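A compact genetic-algorithm sketch in Python for the TSP special case of the VRP, in the spirit of the case study mentioned above. This is an illustrative toy, not the authors' implementation; the random city coordinates, population size, selection scheme and mutation rate are arbitrary assumptions.

```python
# Toy genetic algorithm for a TSP tour (the single-vehicle special case of the VRP).
import random, math

cities = [(random.random(), random.random()) for _ in range(12)]

def tour_length(tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def crossover(p1, p2):                      # ordered crossover keeps each city exactly once
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = p1[a:b]
    return child + [c for c in p2 if c not in child]

def mutate(tour, rate=0.1):                 # simple swap mutation
    if random.random() < rate:
        i, j = random.sample(range(len(tour)), 2)
        tour[i], tour[j] = tour[j], tour[i]
    return tour

population = [random.sample(range(len(cities)), len(cities)) for _ in range(50)]
for _ in range(200):                        # evolve for a fixed number of generations
    population.sort(key=tour_length)
    parents = population[:10]               # truncation selection
    population = parents + [mutate(crossover(*random.sample(parents, 2)))
                            for _ in range(40)]
print("best tour length:", tour_length(min(population, key=tour_length)))
```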
Procedia PDF Downloads 15
1627 The Impact of Varying the Detector and Modulation Types on Inter Satellite Link (ISL) Realizing the Allowable High Data Rate
Authors: Asmaa Zaki M., Ahmed Abd El Aziz, Heba A. Fayed, Moustafa H. Aly
Abstract:
ISLs are the most popular choice for deep space communications because these links are attractive alternatives to present-day microwave links. This paper explores the allowable high data rate on this link over different orbits, which is affected by variation in modulation scheme and detector type. Moreover, the objective of this paper is to optimize and analyze the performance of the ISL in terms of Q-factor and Minimum Bit Error Rate (Min-BER) for different detector types and system parameters.
Keywords: free space optics (FSO), field of view (FOV), inter satellite link (ISL), optical wireless communication (OWC)
Procedia PDF Downloads 399
1626 Investigating Nanocrystalline CaF₂:Tm for Carbon Beam and Gamma Radiation Dosimetry
Authors: Kanika Sharma, Shaila Bahl, Birendra Singh, Pratik Kumar, S. P. Lochab, A. Pandey
Abstract:
In the present investigation, nanoparticles of CaF₂ were first prepared by the chemical co-precipitation method, and the prepared salt was later activated with thulium (0.1 mol%) using the combustion technique. The final product was characterized and confirmed by X-ray diffraction (XRD) and transmission electron microscopy (TEM). Further, the thermoluminescence (TL) properties of the nanophosphor were studied by irradiating it with 1.25 MeV gamma radiation and a 65 MeV carbon (C⁶⁺) ion beam. For gamma rays, two prominent TL peaks were observed: a low-temperature peak at around 107 °C and a high-temperature peak at around 157 °C. Furthermore, the nanophosphor maintained a linear TL response over the entire range of studied doses, i.e. 10 Gy to 2000 Gy, for both temperature peaks. Moreover, when the nanophosphor was irradiated with the 65 MeV C⁶⁺ ion beam, the shape and structure of the glow curves remained remarkably similar, and the nanophosphor displayed a linear TL response over the full range of studied fluences, i.e. 5×10¹⁰ ions/cm² to 1×10¹² ions/cm². Finally, various tests, such as reproducibility and batch homogeneity, were also carried out to characterize the final product. Thus, the co-precipitation method followed by the combustion technique was successful in producing dosimetric-grade CaF₂:Tm for dosimetry of gamma as well as carbon (C⁶⁺) beams.
Keywords: gamma radiation, ion beam, nanocrystalline, radiation dosimetry
Procedia PDF Downloads 186
1625 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation
Authors: Jonathan Gong
Abstract:
Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, X-rays have not been widely used to detect and diagnose COVID-19. The underuse of X-rays is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field has suggested that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database. This dataset includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 images separate from the ones used for training. The images used to train the classification model include an important feature: the pictures are cropped beforehand to eliminate distractions when training the model. The image segmentation model uses an improved U-Net architecture. This model is used to extract the lung mask from the chest X-ray image. The model is trained on 8577 images and validated on a validation split of 20%. These models are evaluated using the external dataset for validation. The models' accuracy, precision, recall, F1-score, IOU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IOU of 0.928. Conclusion: The models proposed can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning
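A sketch of the transfer-learning idea described above, assuming TensorFlow/Keras and a pre-trained DenseNet201 backbone. The dense-layer sizes, dropout rate and input resolution are assumptions, and the paper's autoencoder stage is omitted for brevity; this is not the authors' exact model.

```python
# Sketch: frozen DenseNet201 backbone with a small classifier head for
# COVID-19 / normal / pneumonia chest X-rays (illustrative settings).
import tensorflow as tf

base = tf.keras.applications.DenseNet201(include_top=False, weights="imagenet",
                                          input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                       # transfer learning: reuse pre-trained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation="softmax"),   # three diagnostic classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, validation_data=(val_images, val_labels), epochs=10)
```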
Procedia PDF Downloads 131
1624 Towards End-To-End Disease Prediction from Raw Metagenomic Data
Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker
Abstract:
Analysis of the human microbiome using metagenomic sequencing data has demonstrated high ability in discriminating various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from the fragmented DNA sequences, stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use and time-consuming, and they rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training deep neural networks directly from raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings that create a meaningful and numerical representation of DNA sequences, while extracting features and reducing the dimensionality of the data. In this paper, we present an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads: metagenome2vec. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple instance learning classifier which predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to each genome's influence on the prediction. Using two public real-life datasets as well as a simulated one, we demonstrate that this original approach reaches high performance, comparable with state-of-the-art methods applied directly on data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine
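A small sketch of step (i), tokenising reads into k-mers and learning their numerical embeddings. Using gensim's Word2Vec as the embedding learner is an assumption made here for illustration; k = 4, the embedding size and the toy reads are arbitrary and do not reflect the paper's settings.

```python
# Sketch: k-mer vocabulary and k-mer embeddings from raw reads (toy data).
from gensim.models import Word2Vec

def kmerize(read, k=4):
    """Split a DNA read into overlapping k-mer tokens."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

reads = ["ATGCGTACGTTAG", "GGCATCGATCGAA", "TTACGGATGCGTA"]   # stand-ins for fastq reads
sentences = [kmerize(r) for r in reads]

model = Word2Vec(sentences, vector_size=32, window=5, min_count=1, sg=1)
print(model.wv["ATGC"][:5])   # numerical embedding of one k-mer

# A read embedding (step ii) can then be approximated, e.g., by averaging its k-mer vectors.
```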
Procedia PDF Downloads 126
1623 Stationary Gas Turbines in Power Generation: Past, Present and Future Challenges
Authors: Michel Moliere
Abstract:
In the coming decades, the thermal power generation segment will survive only if it achieves deep mutations, including drastic abatement of CO₂ emissions and strong efficiency gains. In this challenging perspective, stationary gas turbines appear as serious candidates to lead the energy transition. Indeed, during the past decades, these turbomachines have made brisk technological advances in terms of efficiency, reliability, fuel flexibility (including the combustion of hydrogen), and the ability to hybridize with renewables. It is, therefore, timely to summarize the progress achieved by gas turbines in the recent past and to examine their assets for facing the challenges of the energy transition.
Keywords: energy transition, gas turbines, decarbonization, power generation
Procedia PDF Downloads 210
1622 Segmented Pupil Phasing with Deep Learning
Authors: Dumont Maxime, Correia Carlos, Sauvage Jean-François, Schwartz Noah, Gray Morgan
Abstract:
Context: The concept of the segmented telescope is unavoidable for building extremely large telescopes (ELT) in the quest for spatial resolution, but it also allows one to fit a large telescope within a reduced volume (JWST) or into an even smaller volume (standard CubeSat). CubeSats have tight constraints on the available computational budget and on the small payload volume allowed. At the same time, they undergo thermal gradients leading to large and evolving optical aberrations. Pupil segmentation nevertheless comes with an obvious difficulty: co-phasing the different segments. The CubeSat constraints prevent the use of a dedicated wavefront sensor (WFS), making the focal-plane images acquired by the science detector the most practical alternative. Yet, one of the challenges for wavefront sensing is the non-linearity between the image intensity and the phase aberrations. Moreover, for Earth observation, the object is unknown and unrepeatable. Recently, several studies have suggested neural networks (NN) for wavefront sensing, especially convolutional NNs, which are well known for being non-linear and image-friendly problem solvers. Aims: We study in this paper the prospect of using NNs to measure the phasing aberrations of a segmented pupil directly from the focal-plane image, without dedicated wavefront sensing. Methods: In our application, we take the case of a deployable telescope fitting in a CubeSat for Earth observation, which triples the aperture size (compared to the 10 cm CubeSat standard) and therefore triples the angular resolution capacity. In order to reach the diffraction-limited regime at visible wavelengths, a wavefront error below lambda/50 is typically required. The telescope focal-plane detector, used for imaging, will serve as the wavefront sensor. In this work, we study a point source, i.e. the Point Spread Function (PSF) of the optical system, as the input of a VGG-net neural network, an architecture designed for image regression/classification. Results: This approach shows promising results (about 2 nm RMS of residual WFE, which is below lambda/50, for 40-100 nm RMS of input WFE) with a relatively fast computational time of less than 30 ms, which translates to a small computational burden. These results motivate further study for higher aberrations and noise.
Keywords: wavefront sensing, deep learning, deployable telescope, space telescope
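A minimal sketch of a VGG-style regressor mapping a focal-plane PSF image to segment phasing coefficients, assuming PyTorch/torchvision. The 6-segment pupil, the 3 coefficients per segment (piston, tip, tilt) and the random input are illustrative assumptions, not the paper's configuration.

```python
# Sketch: VGG16 adapted from classification to phasing-coefficient regression.
import torch
from torch import nn
from torchvision.models import vgg16

N_SEGMENTS, COEFFS_PER_SEGMENT = 6, 3      # assumed pupil geometry

model = vgg16(weights=None)
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features,
                                 N_SEGMENTS * COEFFS_PER_SEGMENT)  # regression head

psf = torch.randn(1, 3, 224, 224)          # PSF replicated to 3 channels for the VGG input
predicted_phasing = model(psf)             # shape: (1, 18) phasing coefficients (e.g., in nm)
loss = nn.MSELoss()(predicted_phasing, torch.zeros_like(predicted_phasing))
print(predicted_phasing.shape, loss.item())
```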
Procedia PDF Downloads 106
1621 Characterization Study of Aluminium 6061 Hybrid Composite
Authors: U. Achutha Kini, S. S. Sharma, K. Jagannath, P. R. Prabhu, M. C. Gowri Shankar
Abstract:
Aluminium matrix composites with alumina reinforcements give superior mechanical and physical properties. Their applications in several fields, such as automobile, aerospace, defense, sports, electronics, bio-medical and other industrial purposes, have become essential over the last several decades. In the present work, a hybrid composite was fabricated by the stir casting technique using Al 6061 as the matrix with alumina and silicon carbide (SiC) as reinforcement materials. The weight percentage of alumina is varied from 2 to 4%, and the silicon carbide weight percentage is maintained constant at 2%. Hardness and wear tests are performed in the as-cast and heat-treated conditions. Age hardening treatment was performed on the specimens with solutionizing at 550°C and aging at two temperatures (150 and 200°C) for different time durations. Hardness distribution curves are drawn, and peak hardness values are recorded. The hardness increase was very sensitive to the decrease in aging temperature. There was an improvement in the wear resistance of the peak-aged material when aged at the lower temperature. Also, an increase in the weight percent of alumina increases the wear resistance at the lower aging temperature, but the opposite behavior was seen when aged at the higher temperature.
Keywords: hybrid composite, hardness test, wear test, heat treatment, pin on disc wear testing machine
Procedia PDF Downloads 321
1620 An Analysis of Innovative Cloud Model as Bridging the Gap between Physical and Virtualized Business Environments: The Customer Perspective
Authors: Asim Majeed, Rehan Bhana, Mak Sharma, Rebecca Goode, Nizam Bolia, Mike Lloyd-Williams
Abstract:
This study aims to investigate and explore the underlying causes of the security concerns of customers that emerged when WHSmith transformed its physical system to a virtualized business model through NetSuite. NetSuite is essentially fully integrated software which helps transform a physical system to a virtualized business model. Modern organisations are moving away from traditional business models to cloud-based models, and consequently a better, more secure and innovative environment for customers is expected. Security is the vital issue of this modern-age race when transforming to virtualized, cloud-based models, and designers of interactive systems often misunderstand privacy or even ignore it, thus causing concerns for users. The content analysis approach is used to collect qualitative data from 120 online bloggers, including TRUSTPILOT. The results and findings provide useful new insights into the nature and form of the security concerns of online users after they have used the WHSmith services offered online through their website. The findings have theoretical as well as practical implications for the successful adoption of the cloud computing Business-to-Business model and similar systems.
Keywords: innovation, virtualization, cloud computing, organizational flexibility
Procedia PDF Downloads 384
1619 UF as Pretreatment of RO for Tertiary Treatment of Biologically Treated Distillery Spentwash
Authors: Pinki Sharma, Himanshu Joshi
Abstract:
Distillery spentwash contains high chemical oxygen demand (COD), biological oxygen demand (BOD), color, total dissolved solids (TDS) and other contaminants even after biological treatment. The effluent cannot be discharged as such into surface water bodies or onto land without further treatment. Reverse osmosis (RO) treatment plants have been installed in many distilleries at the tertiary level. However, at most of these sites the plants are not working properly due to the high concentration of organic matter and other contaminants in the biologically treated spentwash. To make membrane treatment a proven and reliable technology, proper pre-treatment is mandatory. In the present study, ultra-filtration (UF) was applied as pre-treatment to RO at the tertiary stage. The operating parameters, namely initial pH (pHo: 2-10), trans-membrane pressure (TMP: 4-20 bar) and temperature (T: 15-43°C), were used for conducting experiments with the UF system. Experiments were optimized over the different operating parameters in terms of COD, color, TDS and TOC removal using response surface methodology (RSM) with a central composite design. The results showed removal of COD, color and TDS of 62%, 93.5% and 75.5%, respectively, with UF at the optimized conditions, with the permeate flux increasing from 17.5 l/m²/h (RO) to 38 l/m²/h (UF-RO). The performance of the RO system was greatly improved both in terms of pollutant removal and water recovery.
Keywords: bio-digested distillery spentwash, reverse osmosis, response surface methodology, ultra-filtration
Procedia PDF Downloads 347
1618 Subsurface Water in Mars' Shallow Diluvium Deposits: Evidence from Tianwen-1 Radar Observations
Authors: Changzhi Jiang, Chunyu Ding, Yan Su, Jiawei Li, Ravi Sharma, Yuanzhou Liu, Jiangwan Xu
Abstract:
Early Mars is believed to have had extensive liquid water activity, which has now predominantly transitioned to a frozen state, with the majority of water stored in polar ice caps. It has long been deemed that the shallow subsurface of Mars' mid-to-low latitudes is devoid of liquid water. However, geological features observed at the Tianwen-1 landing site hint at potential subsurface water. Our research indicates that the shallow subsurface at the Tianwen-1 landing site consists primarily of diluvium deposits containing liquid brine and brine ice, which exhibit diurnal thermal convection processes. Here we report the relationship between the loss tangent and temperature of materials within 5 meters depth of the subsurface at the Tianwen-1 landing site, as detected in situ by the high-frequency radar and climate station onboard the Zhurong rover. When the strata temperature exceeds ~240 K, the mixed brine ice transitions to liquid brine, significantly increasing the loss tangent from an average of ~0.0167 to a maximum of ~0.0448. This finding indicates the presence of substantial subsurface water in Mars' mid-to-low latitudes, influencing the shallow subsurface heat distribution and contributing to the current Martian hydrological cycle.
Keywords: water on Mars, Mars exploration, in-situ radar detection, Tianwen-1 mission
Procedia PDF Downloads 39
1617 Multibody Constrained Dynamics of Y-Method Installation System for a Large Scale Subsea Equipment
Authors: Naeem Ullah, Menglan Duan, Mac Darlington Uche Onuoha
Abstract:
The lowering of subsea equipment into deep waters is a challenging job due to the harsh offshore environment. Many researchers have introduced various installation systems to deploy payloads safely into the deep oceans. In general practice, dual floating vessels are not employed owing to the prevalent safety risks and hazards caused by the ever-increasing dynamical effects arising from the mutual interaction between the bodies. However, on optimal grounds such as economy, the Y-method, in which two conventional tugboats support the equipment by two independent strands connected to a tri-plate above the equipment, has been employed to study the multibody dynamics of dual barge lifting operations. In this study, the two tugboats and the suspended payload (Y-method) are treated as a multibody dynamic system for the lowering of subsea equipment into deep waters. Two wire ropes are used for the lifting and installation operation in this Y-method installation system. Six degrees of freedom (dof) are considered for each body to establish a coupled 18-dof multibody model by the embedding, or velocity transformation, technique. The fundamental and prompt advantage of this technique is that the constraint forces are eliminated directly, with no extra computational effort required for their elimination. The inertial frame of reference is taken at the surface of the water as the time-independent frame of reference, and floating frames of reference are introduced in each body as time-dependent frames of reference in order to formulate the velocity transformation matrix. The local transformation of the generalized coordinates to the inertial frame of reference is executed by applying the Euler angle approach. Spherical joints are articulated among the multibody system as the kinematic joints. The hydrodynamic force, the two strand forces, the hydrostatic force, and the mooring forces are taken into consideration as the external forces. The radiation part of the hydrodynamic force is obtained by employing the Cummins equation. The wave-exciting part of the hydrodynamic force is obtained by using force response amplitude operators (RAOs), which are computed with the commercial solver 'OpenFOAM'. The strand force is obtained by considering the wire rope as an elastic spring. The nonlinear hydrostatic force is obtained by the pressure integration technique at each time step of the wave movement. The mooring forces are evaluated using Faltinsen's analytical approach. The fourth-order Runge-Kutta method is employed to solve the coupled equations of motion obtained for the 18-dof multibody model. The results are correlated with the simulated OrcaFlex model. Moreover, the results from the OrcaFlex model are compared with the MOSES model from previous studies. The multibody dynamic simulations (MBDS) of the single barge lifting operation from the former studies are compared with the MBDS of the established dual barge lifting operation. The dynamics of the dual barge lifting operation are found to be larger in magnitude compared to the single barge lifting operation. It is noticed that the traction at the top connection point of the cable decreases with the increase in length, and it becomes almost constant after passing through the splash zone.
Keywords: dual barge lifting operation, Y-method, multibody dynamics, shipbuilding, installation of subsea equipment
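A generic fourth-order Runge-Kutta step of the kind used above to integrate the coupled equations of motion. The single-dof damped oscillator below is only a toy stand-in for the 18-dof multibody state vector; it is not the barge-payload model.

```python
# Generic RK4 integrator step applied to a toy second-order system.
import numpy as np

def rk4_step(f, t, y, dt):
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def equations_of_motion(t, y):
    # y = [position, velocity]; illustrative damped oscillator, not the 18-dof model
    pos, vel = y
    return np.array([vel, -0.2 * vel - 4.0 * pos])

y, dt = np.array([1.0, 0.0]), 0.01
for step in range(1000):
    y = rk4_step(equations_of_motion, step * dt, y, dt)
print("state after 10 s:", y)
```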
Procedia PDF Downloads 203
1616 An Architecture Based on Capsule Networks for the Identification of Handwritten Signature Forgery
Authors: Luisa Mesquita Oliveira Ribeiro, Alexei Manso Correa Machado
Abstract:
A handwritten signature is a unique means of recognizing an individual, used to authenticate documents and to carry out investigations in the criminal, legal and banking areas, among other applications. Signature verification is based on large amounts of biometric data, which are simple and easy to acquire, among other characteristics. Given this scenario, signature forgery is a recurring problem worldwide, and fast and precise techniques are needed to prevent crimes of this nature from occurring. This article carried out a study on the efficiency of the Capsule Network in analyzing and recognizing signatures. The chosen architecture achieved an accuracy of 98.11% and 80.15% for the CEDAR and GPDS databases, respectively.
Keywords: biometrics, deep learning, handwriting, signature forgery
Procedia PDF Downloads 84
1615 Investigating Anti-bacterial and Anti-Covid-19 Virus Properties and Mode of Action of Mg(OH)₂ and Copper-Infused Mg(OH)₂ Nanoparticles on Coated Polypropylene Surfaces
Authors: Saleh Alkarri, Melinda Frame, Dimple Sharma, John Cairney, Lee Maddan, Jin H. Kim, Jonathan O. Rayner, Teresa M. Bergholz, Muhammad Rabnawaz
Abstract:
Reported herein is an investigation of the anti-bacterial and anti-viral properties and mode of action of Mg(OH)₂ and copper-infused Mg(OH)₂ nanoplatelets (NPs) on melt-compounded and thermally embossed polypropylene (PP) surfaces. The anti-viral activity of the NPs was studied in aqueous liquid suspensions against SARS-CoV-2, and the mode of action was investigated on neat NPs and on PP samples that were thermally embossed with NPs. Anti-bacterial studies of melt-compounded NPs in PP confirmed approximately a 1 log reduction of E. coli populations in 24 h, while for thermally embossed NPs, an 8 log reduction of E. coli populations was observed. In addition, the NPs exhibit anti-viral activity against SARS-CoV-2. Fluorescence microscopy revealed that reactive oxygen species (ROS) are the main mode of action through which Mg(OH)₂ and Cu-infused Mg(OH)₂ act against microbes. Plastics with anti-microbial surfaces from which biocides cannot leach are highly desirable. This work provides a general fabrication strategy for developing anti-microbial plastic surfaces.
Keywords: anti-microbial activity, E. coli K-12 MG1655, anti-viral activity, SARS-CoV-2, copper-infused magnesium hydroxide, non-leachable, ROS, compounding, surface embossing, dyes
Procedia PDF Downloads 67
1614 Discussion of Blackness in Wrestling
Authors: Jason Michael Crozier
Abstract:
The wrestling territories of the mid-twentieth century in the United States are widely considered the birthplace of modern professional wrestling and, by many professional wrestlers, a beacon of hope for the easing of racial tensions during the civil rights era and beyond. The performers writing on this period speak of racial equality but fail to acknowledge the exploitation of black athletes as a racialized capital commodity who suffered the challenges of systemic racism, codified by a false narrative of aspirational exceptionalism and equality measured by audience diversity. The promoters' ability to equate racial and capital exploitation with equality leads to a broader discussion of the history of Muscular Christianity in the United States and the exploitation of black bodies. Narratives of racial erasure that dominate the historical discourse when examining athleticism and exceptionalism redefined how blackness existed and how physicality and race are conceived of in sport and entertainment spaces. When discussing the implications of race and professional wrestling, it is important to examine the role of promotions as 'imagined communities' where the social agency of wrestlers is defined and quantified based on their 'desired elements' as performers. The intentionally vague nature of this language masks a deep history of racialization that has been perpetuated by promoters and never fully examined by scholars. Sympathetic racism and the omission of cultural identity are also key factors in the limitations and racial barriers placed upon black athletes in the squared circle. The use of sympathetic racism within professional wrestling during the twentieth century placed black athletes into two distinct categorizations: the 'black savage' or the 'black minstrel'. Black wrestlers of the twentieth century were defined by their strength as a capital commodity and their physicality rather than their knowledge of the business and in-ring skill. These performers had little agency in shaping their own character development inside and outside the ring. Promoters would often create personas that heavily racialized the performer by tying them to a regional past or memory, such as that of slavery in the Deep South, using dog collar matches and adorning black characters in chains. Promoters softened cultural memory by satirizing the historic legacy of slavery and the black identity.
Keywords: sympathetic racism, social agency, racial commodification, stereotyping
Procedia PDF Downloads 135
1613 A Framework for Improving Trade Contractors’ Productivity Tracking Methods
Authors: Sophia Hayes, Kenny L. Liang, Sahil Sharma, Austin Shema, Mahmoud Bader, Mohamed Elbarkouky
Abstract:
Despite being one of the most significant economic contributors to the country, Canada's construction industry is lagging behind other sectors when it comes to labor productivity improvements. The construction industry is highly collaborative, as a general contractor will hire trade contractors to perform most of a project's work, meaning low productivity from one contractor can have a domino effect on the shared success of a project. To address this issue and encourage trade contractors to improve their productivity tracking methods, an investigative study was done on the productivity views and tracking methods of various trade contractors. Additionally, an in-depth review was done on four standard tracking methods used in the construction industry: cost codes, benchmarking, the job productivity measurement (JPM) standard, and WorkFace Planning (WFP). The four tracking methods were used as a baseline in comparing the trade contractors' responses, determining gaps within their current tracking methods, and making improvement recommendations. Fifteen interviews were conducted with different trades to analyze how contractors value productivity. The results of these analyses indicated that there seem to be gaps within the construction industry when it comes to understanding the purpose and value of productivity tracking. The trade contractors also shared their current productivity tracking systems, which were then compared to the four standard tracking methods used in the construction industry. Gaps were identified in their various tracking methods, and using a framework, recommendations were made, based on the type of trade, on how to improve productivity tracking.
Keywords: labor productivity, productivity tracking methods, trade contractors, construction
Procedia PDF Downloads 194
1612 Seasonal Variation in Free Radical Scavenging Properties of Indian Moringa (Moringa Oleifera)
Authors: Awadhesh Kishore, Tushar Sharma
Abstract:
The goal of this study was to compare the free radical-scavenging (FRS) characteristics of four Indian moringa (Moringa oleifera) plant components: flowers, tender leaves, mature leaves, and seeds, collected in every month of 2021–2022 from three Indian districts: Jaipur, Dehra Dun, and Gwalior. The samples were collected from three randomly selected agroforest locations in each district. The samples were extracted, and antioxidant properties were determined following the DPPH method with minor modifications. The FRS activity was calculated as the percentage reduction in sample absorbance relative to the control. The factorial ANOVA statistical analysis technique was implemented for comparing FRS properties, and the MS Office Excel 2016 analysis pack was used to compare data. The flowers from Dehra Dun had superior FRS properties (27.06±1.03%), while the seeds from the same location were inferior (8.64±0.17%). The FRS properties of flowers (26.27±0.61%) were not statistically different (P > 0.05) compared to those of tender (27.30±0.63%) and mature leaves (28.37±0.59%), but significantly higher (P < 0.05) than those of seeds (9.31±0.16%). However, the FRS properties in Indian moringa were significantly higher during the winter (Jan 28.67±1.48%) compared to the summer (Jun 14.03±0.79%) season, whereas those from the three locations, viz. Gwalior (22.35±0.70%), Jaipur (23.06±0.73%), and Dehra Dun (23.10±0.76%), were not significantly different (P > 0.05). Based on this study, it can be concluded that the FRS value of flowers during the winter season is superior.
Keywords: flowers, free radical-scavenging, leaves, Moringa oleifera, seeds
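A minimal sketch of the typical DPPH scavenging calculation expressed as percentage inhibition of absorbance. The wavelength and the absorbance readings below are invented examples for illustration, not the study's data.

```python
# Typical DPPH free-radical-scavenging (FRS) percentage calculation (toy numbers).
def frs_percent(a_control, a_sample):
    """FRS (%) = (A_control - A_sample) / A_control * 100."""
    return (a_control - a_sample) / a_control * 100.0

a_control = 0.820          # DPPH blank absorbance, e.g. at 517 nm (hypothetical reading)
a_flower_extract = 0.598   # sample absorbance (hypothetical reading)
print(f"FRS activity: {frs_percent(a_control, a_flower_extract):.1f}%")
```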
Procedia PDF Downloads 76
1611 Fusion Reactions at Low Bombarding Energies
Authors: Nitin Sharma, Rahbar Ali, Dharmendra Singh, R. P. Singh, S. Muralithar, M. Afzal Ansari
Abstract:
Heavy-ion-induced reactions have gained significant attention in nuclear physics due to their potential to elucidate reaction mechanisms and explore practical applications. Hence, the present simulation work has been done with a ¹²C projectile on ¹⁴²,¹⁴⁶Nd targets at beam energies ranging from 4 to 7 MeV/nucleon. In the present work, the excitation functions of evaporation residues produced via CF and/or ICF in the ¹²C + ¹⁴²,¹⁴⁶Nd systems have been determined. The evaporation residues ¹⁵⁰Dy (4n), ¹⁴⁹Dy (5n), and ¹⁴⁹Tb (p4n) are populated via xn/pxn emission channels and ¹⁴⁷,¹⁴⁶Gd (α3n/α4n) via αxn emission channels in the ¹²C + ¹⁴²Nd system, as confirmed by the statistical model codes PACE-4 and EMPIRE 3.2.2. The evaporation residues ¹⁵⁴Dy (4n), ¹⁵³Dy (5n), and ¹⁵³Tb (p4n) are populated via xn/pxn emission channels and ¹⁵⁰Gd (α4n) via αxn emission channels in the ¹²C + ¹⁴⁶Nd system. The cross-sections of the above residues have been taken from PACE-4 and EMPIRE 3.2.2 and compared. The present work also suggests a production route for the ¹⁴⁹Tb radioisotope via heavy-ion reactions. In the reaction ¹²C + ¹⁴²Nd, the ¹⁴⁹Tb radioisotope has been produced, which is the only α-emitting radioisotope of Tb and is promising for targeted alpha therapy. Moreover, these reactions are important to understand the role of target deformation in fusion reactions above the Coulomb barrier, as the ¹⁴²Nd target is spherical and ¹⁴⁶Nd is deformed.
Keywords: heavy-ion reactions, radioisotopes, nuclear physics, target deformation
Procedia PDF Downloads 7
1610 Exploring the Synergistic Effects of Aerobic Exercise and Cinnamon Extract on Metabolic Markers in Insulin-Resistant Rats through Advanced Machine Learning and Deep Learning Techniques
Authors: Masoomeh Alsadat Mirshafaei
Abstract:
The present study aims to explore the effect of an 8-week aerobic training regimen combined with cinnamon extract on serum irisin and leptin levels in insulin-resistant rats. Additionally, this research leverages various machine learning (ML) and deep learning (DL) algorithms to model the complex interdependencies between exercise, nutrition, and metabolic markers, offering a groundbreaking approach to obesity and diabetes research. Forty-eight Wistar rats were selected and randomly divided into four groups: control, training, cinnamon, and training-cinnamon. The training protocol was conducted over 8 weeks, with sessions 5 days a week at 75-80% VO2 max. The cinnamon and training-cinnamon groups were injected with 200 ml/kg/day of cinnamon extract. Data analysis included serum data, dietary intake, exercise intensity, and metabolic response variables, with blood samples collected 72 hours after the final training session. The dataset was analyzed using one-way ANOVA (P<0.05) and fed into various ML and DL models, including Support Vector Machines (SVM), Random Forests (RF), and Convolutional Neural Networks (CNN). Traditional statistical methods indicated that aerobic training, with and without cinnamon extract, significantly increased serum irisin and decreased leptin levels. Among the algorithms, the CNN model provided superior performance in identifying specific interactions between cinnamon extract concentration and exercise intensity, optimizing the increase in irisin and the decrease in leptin. The CNN model achieved an accuracy of 92%, outperforming the SVM (85%) and RF (88%) models in predicting the optimal conditions for metabolic marker improvements. The study demonstrated that advanced ML and DL techniques can uncover nuanced relationships and potential cellular responses to exercise and dietary supplements that are not evident through traditional methods. These findings advocate for the integration of advanced analytical techniques in nutritional science and exercise physiology, paving the way for personalized health interventions in managing obesity and diabetes.
Keywords: aerobic training, cinnamon extract, insulin resistance, irisin, leptin, convolutional neural networks, exercise physiology, support vector machines, random forest
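A sketch of the SVM vs. random-forest comparison using scikit-learn on a synthetic feature matrix standing in for the serum, diet and exercise variables. The CNN branch is omitted and all data and settings below are illustrative assumptions, not the study's dataset or models.

```python
# Sketch: comparing SVM and random-forest classifiers on synthetic tabular features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=200, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("Random Forest", RandomForestClassifier(n_estimators=200))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```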
Procedia PDF Downloads 41
1609 Investigation of Bubble Growth During Nucleate Boiling Using CFD
Authors: K. Jagannath, Akhilesh Kotian, S. S. Sharma, Achutha Kini U., P. R. Prabhu
Abstract:
The boiling process is characterized by the rapid formation of vapour bubbles at the solid-liquid interface (nucleate boiling) with pre-existing vapour or gas pockets. Computational fluid dynamics (CFD) is an important tool to study bubble dynamics. In the present study, a CFD simulation has been carried out to determine the bubble detachment diameter and its terminal velocity. The volume of fluid method is used to model the bubble and its surroundings by solving a single set of momentum equations and tracking the volume fraction of each fluid throughout the domain. In the simulation, the bubble is generated by allowing water vapour to enter a cylinder filled with liquid water through an inlet at the bottom. After the bubble is fully formed, it detaches from the surface and rises, during which it accelerates due to the net imbalance between the buoyancy force and viscous drag. Finally, when these forces exactly balance each other, the bubble attains a constant terminal velocity. The bubble detachment diameter and the terminal velocity of the bubble are captured by the monitor function provided in FLUENT. The detachment diameter and the terminal velocity obtained are compared with established results based on the shape of the bubble. Good agreement is obtained between the simulation results and the established correlations.
Keywords: bubble growth, computational fluid dynamics, detachment diameter, terminal velocity
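A back-of-the-envelope illustration of the force balance described above: at terminal velocity, buoyancy minus weight equals drag. The Schiller-Naumann rigid-sphere drag correlation and the water/vapour properties below are illustrative assumptions for a simple hand check, not the paper's CFD setup.

```python
# Terminal-velocity estimate from the buoyancy-drag force balance (illustrative only).
import math

d = 2e-3                     # bubble diameter (m), assumed
rho_l, rho_v = 998.0, 0.6    # liquid and vapour densities (kg/m^3)
mu_l, g = 1.0e-3, 9.81       # liquid viscosity (Pa.s), gravity (m/s^2)

vol = math.pi * d**3 / 6
area = math.pi * d**2 / 4
u = 0.1                      # initial velocity guess (m/s)
for _ in range(100):         # fixed-point iteration on the force balance
    re = rho_l * u * d / mu_l
    cd = 24 / re * (1 + 0.15 * re**0.687)          # Schiller-Naumann (rigid sphere)
    u = math.sqrt(2 * (rho_l - rho_v) * g * vol / (cd * rho_l * area))
print(f"estimated terminal velocity: {u:.3f} m/s")
```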
Procedia PDF Downloads 387
1608 Cognitive Development Theories as Determinant of Children's Brand Recall and Ad Recognition: An Indian Perspective
Authors: Ruchika Sharma
Abstract:
In the past decade, there has been an explosion of research examining children's understanding of TV advertisements and their persuasive intent, the socialization of the child consumer, and child psychology. However, it is evident from the literature review that no studies in this area have covered advertising messages and their impact on children's brand recall and ad recognition. Copywriters use various creative devices to lure consumers, and very impressionable consumers such as children face far more drastic effects of these creative ways of persuasion. Using Piaget's theory of cognitive development as a theoretical basis for predicting and understanding children's responses, a quasi-experiment was carried out that manipulated measurement timing and advertising messages (familiar vs. unfamiliar), keeping gender and age group as two prominent factors. This study also examines children's understanding of advertisements and their elements, predominantly language, keeping Fishbein's model in view. The study revealed significant associations between the above-mentioned factors and children's brand recall and ad identification. Further, to test the reliability of the findings on a larger sample, a bootstrap simulation technique was used. The simulation results are in accordance with the findings of the experiment, suggesting that the conclusions obtained from the study can be generalized to the entire children's (as consumers) market in India.
Keywords: advertising, brand recall, cognitive development, preferences
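A minimal sketch of the kind of bootstrap simulation used to check how well sample findings generalize. The recall scores below are randomly generated stand-ins, not the study's data; the confidence-interval construction shown is one common choice, not necessarily the authors' exact procedure.

```python
# Bootstrap resampling of a sample mean with a percentile confidence interval (toy data).
import numpy as np

rng = np.random.default_rng(0)
brand_recall_scores = rng.normal(loc=6.2, scale=1.5, size=60)   # hypothetical sample

boot_means = [rng.choice(brand_recall_scores, size=len(brand_recall_scores),
                         replace=True).mean() for _ in range(5000)]
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"bootstrap 95% CI for mean recall: [{lo:.2f}, {hi:.2f}]")
```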
Procedia PDF Downloads 292
1607 Polymer-Ceramic Composite Film Fabrication and Characterization for Harsh Environment Applications
Authors: Santiranjan Shannigrahi, Mohit Sharma, Ivan Tan Chee Kiang, Yong Anna Marie
Abstract:
Polymer-ceramic composites are gaining importance due to their high specific strength, corrosion resistance, and high mechanical properties, as well as low cost. As a result, polymer composites are suitable for various industrial applications, such as automobiles, aerospace, and biomedical areas. The present work comprises the development of polymer-ceramic composite films, which are tested for harsh environments, including weatherability and UV barrier properties. The polymer composite films are kept in a weather chamber for a fixed period of time and then tested for their physical, mechanical and chemical properties. The composite films are fabricated using compounding followed by hot pressing. UV-visible spectroscopy results reveal that the pure polyethylene (PE) films are transparent in the visible range and do not absorb UV. However, the polymer-ceramic composite films start absorbing UV completely even at a very low filler loading of 5 wt.%. The changes in tensile properties of the various composite films before and after UV illumination for 40 h at 60 °C are analyzed. The tensile strength of the neat PE film showed an 8% reduction, whereas a remarkable increase in tensile strength was observed for the filled composite films (18% improvement for 10 wt.% filled films). The UV exposure strengthens the crosslinking among the PE polymer chains in the filled composite films, which contributes to the increased tensile strength.
Keywords: polymer ceramic composite, processing, harsh environment, mechanical properties
Procedia PDF Downloads 385
1606 Attachment Style, Attachment Figure, and Intimate Relationship among Emerging Adults with Anxiety and Depression
Authors: P. K. Raheemudheen, Vibha Sharma, C. B. Tripathi
Abstract:
Background and Aim: Intimate relationships are one of the major sources of unhappiness for emerging adults (18-25 years), and the extent of worry arising from them is higher than for older adults. This increases their vulnerability to developing anxiety and depression. The current academic literature has highlighted that adult attachment has a crucial role in determining psychosocial adjustment and psychopathology in emerging adulthood. In this context, the present study is an attempt to explore patterns of adult attachment styles, the availability of attachment figures, and dimensions of intimate relationships among emerging adults. Method: The participants (n=30) were emerging adults diagnosed with anxiety or/and depression seeking treatment from IHBAS, Delhi. The Relationship Style Questionnaire was used to assess adult attachment styles, and the Multidimensional Relationship Questionnaire was used to assess dimensions of intimate relationships. Results & Discussion: Results showed that the majority of the participants have insecure attachment styles. They perceived their attachment figure as insensitive and unavailable. Further, it was found that participants experience multiple difficulties in establishing and maintaining healthy intimate relationships. These findings highlight that adult attachment insecurities seem to contribute to anxiety and depression among emerging adults. They provide a conceptual foundation for planning interventions to address these attachment-based correlates of anxiety and depression, which may be more amenable to therapeutic change.
Keywords: emerging adult, adult attachment, intimate relationship, anxiety
Procedia PDF Downloads 307
1605 Teachers' Design and Implementation of Collaborative Learning Tasks in Higher Education
Authors: Bing Xu, Kerry Lee, Jason M. Stephen
Abstract:
Collaborative learning (CL) has been regarded as a way to help students gain knowledge and improve social skills. In China, lecturers in higher education institutions have commonly adopted CL in their daily practice. However, such a strategy cannot be effective when it is designed and applied in an inappropriate way. Previous research has hardly focused on how CL is applied in Chinese universities. The present study aims to gain a deep understanding of how Chinese lecturers design and implement CL tasks. The researchers interviewed ten lecturers from different faculties in various universities in China and used the Group Learning Activity Instructional Design (GLAID) framework to analyse the data. We found that not all lecturers pay enough attention to the eight essential components (proposed by GLAID) when they design CL tasks, especially the components of Structure and Guidance. Meanwhile, only a small proportion of lecturers used formative assessment to help students improve learning. We also discuss the strengths and limitations of CL design and further provide suggestions to lecturers who intend to use CL in class. Research Objectives: The aims of the present research are threefold. We intend to 1) gain a deep understanding of how Chinese lecturers design and implement collaborative learning (CL) tasks, 2) find strengths and limitations of CL design in higher education, and 3) give suggestions on how to improve the design and implementation. Research Methods: This research adopted qualitative methods. We applied the semi-structured interview method to interview ten Chinese lecturers about how they designed and implemented CL tasks in their courses. There were 9 questions in the interview protocol, focusing on the eight components of GLAID. Then, underpinned by the GLAID framework, we utilized the coding reliability thematic analysis method to analyse the research data. The coding work was done by two PhD students whose research field is CL, and Cohen's Kappa was 0.772, showing good inter-coder reliability. Contribution: Though CL has been commonly adopted in China, few studies have paid attention to the details of how lecturers design and implement CL tasks in practice. This research addressed this gap and found that not all lecturers were aware of how to design CL; they found it difficult to structure tasks, guide students in collaboration, and further ensure student engagement in CL. In summary, this research advocates for teacher training; otherwise, students may not gain the expected learning outcomes.
Keywords: collaborative learning, higher education, task design, GLAID framework
Procedia PDF Downloads 99
1604 Depth Estimation in DNN Using Stereo Thermal Image Pairs
Authors: Ahmet Faruk Akyuz, Hasan Sakir Bilge
Abstract:
Depth estimation using stereo images is a challenging problem in computer vision. Many different studies have been carried out to solve this problem. With advancing machine learning, tackling this problem is often done with neural network-based solutions. The images used in these studies are mostly in the visible spectrum. However, the need to use the infrared (IR) spectrum for depth estimation has emerged because it gives better results than the visible spectrum under some conditions. At this point, we recommend using thermal-thermal (IR) image pairs for depth estimation. In this study, we used two well-known networks (PSMNet, FADNet) with minor modifications to demonstrate the viability of this idea.
Keywords: thermal stereo matching, deep neural networks, CNN, depth estimation
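Once a stereo network such as PSMNet or FADNet has predicted a disparity map for a rectified pair, depth follows from the standard relation depth = focal length × baseline / disparity. The sketch below illustrates only this conversion step; the focal length, baseline and toy disparity values are assumed numbers, not those of an actual thermal camera rig.

```python
# Converting a predicted disparity map to metric depth (rectified stereo geometry).
import numpy as np

focal_px = 640.0          # focal length in pixels (assumed)
baseline_m = 0.12         # distance between the two thermal cameras in metres (assumed)

disparity = np.array([[8.0, 16.0], [32.0, 64.0]])                # toy disparity map (pixels)
depth = focal_px * baseline_m / np.clip(disparity, 1e-3, None)   # depth in metres
print(depth)
```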
Procedia PDF Downloads 281
1603 Transcriptional Profiling of Developing Ovules in Litchi chinensis
Authors: Ashish Kumar Pathak, Ritika Sharma, Vishal Nath, Sudhir Pratap Singh, Rakesh Tuli
Abstract:
Litchi is a sub-tropical fruit crop with genotypes bearing delicious juicy fruits with variable seed size (bold to rudimentary). Small seed size is a desirable trait in litchi, as it increases consumer acceptance and facilitates fruit processing. The biochemical activities in mid-stage ovules (e.g. 16, 20, 24 and 28 days after anthesis) determine the fate of seed and fruit development in litchi. Comprehensive ovule-specific transcriptome analysis was performed in two litchi genotypes with contrasting seed sizes to gain molecular insight into the determinants of seed fate in litchi fruits. The transcriptomic data were de novo assembled into 139,608 Trinity transcripts, of which 6,325 Trinity transcripts were differentially expressed between the two contrasting genotypes. Differential transcriptional patterns were found among ovule development stages in the contrasting litchi genotypes. The putative genes for the salicylic acid, jasmonic acid and brassinosteroid pathways were down-regulated in ovules of the small-seeded litchi. Embryogenesis-, cell expansion-, seed size- and stress-related Trinity transcripts exhibited altered expression in the small-seeded genotype. The putative regulators of seed maturation and seed storage were down-regulated in the small-seeded genotype.
Keywords: Litchi, seed, transcriptome, defence
Procedia PDF Downloads 246