Search results for: X-ray computed microtomography
507 Effectiveness of Column Geometry in High-Rise Buildings
Authors: Man Singh Meena
Abstract:
Structural engineers are facing different kinds of challenges due to the innovative and bold ideas of architects, who try to give every structure a unique design. In RCC frame structures, different column geometries can be used, and rectangular columns can be placed with different orientations. The analysis and design of structures can be carried out with different software packages, i.e., STAAD Pro, ETABS and TEKLA. In recent times, high-rise building modelling and analysis has mostly been done in ETABS due to certain features that are superior to other software. The case study in this paper mainly emphasizes the structural behavior of a high-rise building with a rectangular plan for different column shape configurations: circular, square, rectangular, and rectangular with 90-degree rotation. In all these configurations the column cross-sectional areas are kept the same so that the effect of geometry can be studied at constant concrete area. A 20-storey R.C.C. framed building is modelled in ETABS for analysis. After analysis of the structure, maximum bending moments, shear forces and maximum longitudinal reinforcement are computed and compared for three different storey structures to identify the effectiveness of the column geometry.
Keywords: high-rise building, column geometry, building modelling, ETABS analysis, building design, structural analysis, structural optimization
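To illustrate why column shape matters even when the cross-sectional area is held constant, the short sketch below compares the second moment of area of equal-area circular, square and 2:1 rectangular sections (in both orientations). The 400 cm² area and the 2:1 aspect ratio are assumed purely for illustration and are not taken from the paper.

```python
import math

A = 0.04  # assumed cross-sectional area in m^2 (400 cm^2), illustrative only

# Circular section: A = pi*d^2/4, I = pi*d^4/64
d = math.sqrt(4 * A / math.pi)
I_circle = math.pi * d**4 / 64

# Square section: A = a^2, I = a^4/12
a = math.sqrt(A)
I_square = a**4 / 12

# Rectangular section with 2:1 aspect ratio: A = b*h with h = 2*b
b = math.sqrt(A / 2)
h = 2 * b
I_strong = b * h**3 / 12  # bending about the strong axis
I_weak = h * b**3 / 12    # same column rotated 90 degrees

for name, I in [("circle", I_circle), ("square", I_square),
                ("rect 2:1 strong axis", I_strong), ("rect 2:1 weak axis", I_weak)]:
    print(f"{name:22s} I = {I*1e6:8.2f} x 10^-6 m^4")
```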
Procedia PDF Downloads 81
506 Digital Material Characterization Using the Quantum Fourier Transform
Authors: Felix Givois, Nicolas R. Gauger, Matthias Kabel
Abstract:
Efficient digital material characterization is of great interest to many fields of application. It consists of the following three steps. First, a 3D reconstruction of 2D scans must be performed. Then, the resulting gray-value image of the material sample is enhanced by image processing methods. Finally, partial differential equations (PDE) are solved on the segmented image, and by averaging the resulting solution fields, effective properties like stiffness or conductivity can be computed. Due to the high resolution of current CT images, the latter is typically performed with matrix-free solvers. Among them, a solver that uses the explicit formula of the Green-Eshelby operator in Fourier space has been proposed by Moulinec and Suquet. Its most complex algorithmic part is the Fast Fourier Transform (FFT). In our talk, we will discuss the potential quantum advantage that can be obtained by replacing the FFT with the Quantum Fourier Transform (QFT). We will especially show that the data transfer for noisy intermediate-scale quantum (NISQ) devices can be improved by using appropriate boundary conditions for the PDE, which also allows using semi-classical versions of the QFT. In the end, we will compare the results of the QFT-based algorithm for simple geometries with the results of the FFT-based homogenization method.
Keywords: maximum likelihood quantum amplitude estimation (MLQAE), numerical homogenization, quantum Fourier transform (QFT), NISQ devices
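As background for the FFT-to-QFT replacement discussed above, the short numpy check below builds the n-qubit QFT matrix and verifies that, up to the sign convention of the exponent and the 1/sqrt(N) normalization, it coincides with the classical discrete Fourier transform. This is a generic textbook identity, not the authors' implementation.

```python
import numpy as np

def qft_matrix(n_qubits):
    """Unitary QFT matrix: F[j, k] = omega^(j*k) / sqrt(N), omega = exp(2*pi*i/N)."""
    N = 2 ** n_qubits
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

n = 3
N = 2 ** n
F = qft_matrix(n)

x = np.random.rand(N) + 1j * np.random.rand(N)
x = x / np.linalg.norm(x)                # normalized "state vector"

# The QFT uses the +2*pi*i convention, so it equals sqrt(N) times numpy's inverse FFT.
qft_x = F @ x
dft_x = np.sqrt(N) * np.fft.ifft(x)

print("unitary:", np.allclose(F.conj().T @ F, np.eye(N)))
print("matches normalized inverse DFT:", np.allclose(qft_x, dft_x))
```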
Procedia PDF Downloads 78
505 A Comparison of the Adsorption Mechanism of Arsenic on Iron-Modified Nanoclays
Authors: Michael Leo L. Dela Cruz, Khryslyn G. Arano, Eden May B. Dela Pena, Leslie Joy Diaz
Abstract:
Arsenic adsorbents are continuously being researched to ease the detrimental impact of arsenic on human health. A comparative study on the adsorption mechanism of arsenic on iron-modified nanoclays was undertaken. Iron-intercalated montmorillonite (Fe-MMT) and montmorillonite-supported zero-valent iron (ZVI-MMT) were the adsorbents investigated in this study. Fe-MMT was produced through ion exchange by replacing the sodium intercalated ions in montmorillonite with iron (III) ions. The iron (III) in Fe-MMT was later reduced to zero-valent iron, producing ZVI-MMT. The adsorption study was performed by a batch technique. The obtained data were fitted to the intra-particle diffusion, pseudo-first-order and pseudo-second-order models and the Elovich equation to determine the kinetics of adsorption. The adsorption of arsenic on Fe-MMT followed the intra-particle diffusion model with an intra-particle rate constant of 0.27 mg/(g·min^0.5). Arsenic was found to be chemically bound on ZVI-MMT, as suggested by the pseudo-second-order model and the Elovich equation. The derived pseudo-second-order rate constant was 0.0027 g/(mg·min), and the initial adsorption rate computed from the Elovich equation was 113 mg/(g·min).
Keywords: adsorption mechanism, arsenic, montmorillonite, zero valent iron
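For readers unfamiliar with the kinetic models named above, the sketch below fits the integrated pseudo-second-order model q_t = k·q_e²·t / (1 + k·q_e·t) to synthetic uptake data with scipy. The data points and starting guesses are invented for illustration and do not reproduce the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k):
    """Integrated pseudo-second-order model: qt = k*qe^2*t / (1 + k*qe*t)."""
    return k * qe**2 * t / (1.0 + k * qe * t)

# Synthetic batch-adsorption data (time in min, uptake qt in mg/g) -- illustrative only
t = np.array([5, 10, 20, 40, 60, 90, 120, 180], dtype=float)
qt = np.array([3.1, 5.2, 7.6, 9.4, 10.1, 10.6, 10.8, 11.0])

(qe_fit, k_fit), _ = curve_fit(pseudo_second_order, t, qt, p0=[11.0, 0.01])
h0 = k_fit * qe_fit**2  # initial adsorption rate, mg/(g*min)

print(f"qe = {qe_fit:.2f} mg/g, k2 = {k_fit:.4f} g/(mg*min), h0 = {h0:.2f} mg/(g*min)")
```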
Procedia PDF Downloads 415
504 A Supramolecular Cocrystal of 2-Amino-4-Chloro-6-Methylpyrimidine with 4-Methylbenzoic Acid: Synthesis, Structural Determinations and Quantum Chemical Investigations
Authors: Nuridayanti Che Khalib, Kaliyaperumal Thanigaimani, Suhana Arshad, Ibrahim Abdul Razak
Abstract:
The 1:1 co-crystal of 2-amino-4-chloro-6-methylpyrimidine (2A4C6MP) with 4-methylbenzoic acid (4MBA) (I) has been prepared by the slow evaporation method in methanol and crystallized in the monoclinic C2/c space group, Z = 8, a = 28.431 (2) Å, b = 7.3098 (5) Å, c = 14.2622 (10) Å, and β = 109.618 (3)°. The presence of the unionized –COOH functional group in co-crystal I was identified both by spectral methods (1H and 13C NMR, FTIR) and by X-ray diffraction structural analysis. The 2A4C6MP molecule interacts with the carboxylic group of the respective 4MBA molecule through N—H⋯O and O—H⋯N hydrogen bonds, forming a cyclic hydrogen-bonded motif R₂²(8). The crystal structure is stabilized by Npyrimidine—H⋯O=C and O—H⋯Npyrimidine type hydrogen-bonding interactions. Theoretical investigations have been carried out by the HF and density functional (B3LYP) methods with the 6-311+G(d,p) basis set. The vibrational frequencies together with the 1H and 13C NMR chemical shifts have been calculated on the fully optimized geometry of co-crystal I. The theoretical calculations are in good agreement with the experimental results. Solvent-free formation of this co-crystal I is confirmed by powder X-ray diffraction analysis.
Keywords: supramolecular co-crystal, 2-amino-4-chloro-6-methylpyrimidine, Hartree-Fock and DFT studies, spectroscopic analysis
Procedia PDF Downloads 309
503 Optimization of Surface Roughness in Turning Process Utilizing Live Tooling via Taguchi Methodology
Authors: Weinian Wang, Joseph C. Chen
Abstract:
The objective of this research is to optimize the process of cutting cylindrical workpieces utilizing live tooling on a HAAS ST-20 lathe. Surface roughness (Ra) has been investigated as the indicator of quality characteristics for the machining process. An aluminum alloy was used to conduct the experiments due to its wide range of uses in engineering structures and components where light weight or corrosion resistance is required. In this study, the Taguchi methodology is utilized to determine the effects that each of the parameters has on surface roughness (Ra). A total of 18 experiments for each process were designed according to Taguchi's L9 orthogonal array (OA) with four control factors at three levels each, and signal-to-noise ratios (S/N) were computed with the smaller-the-better equation for minimizing the response. The optimal parameters identified for the surface roughness of the turning operation utilizing live tooling were a feed rate of 3 inches/min (A3); a spindle speed of 1300 rpm (B3); a 2-flute titanium nitride coated 3/8” endmill (C1); and a depth of cut of 0.025 inches (D2). The mean surface roughness of the confirmation runs in the turning operation was 8.22 micro-inches. The final results demonstrate that the Taguchi methodology is an effective way of improving surface roughness in the turning process.
Keywords: CNC milling operation, CNC turning operation, surface roughness, Taguchi parameter design
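The smaller-the-better signal-to-noise ratio used above is S/N = -10·log10((1/n)·Σyᵢ²). The sketch below computes it for a few hypothetical Ra replicates per trial; the roughness values are invented for illustration and are not the L9 experimental data.

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi smaller-the-better S/N ratio: -10*log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# Hypothetical Ra replicates (micro-inches) for three trials of an L9 array
trials = {
    "trial 1": [10.4, 11.0],
    "trial 2": [8.1, 8.5],
    "trial 3": [9.2, 9.0],
}
for name, ra in trials.items():
    print(f"{name}: S/N = {sn_smaller_the_better(ra):.2f} dB")
# The factor-level combination with the highest average S/N is selected as optimal.
```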
Procedia PDF Downloads 175
502 New Practical and Non-Malleable Elgamal Encryption for E-Voting Protocol
Authors: Karima Djebaili, Lamine Melkemi
Abstract:
Elgamal encryption is a fundamental public-key encryption scheme in cryptography, which is based on the difficulty of the discrete logarithm problem and the Diffie-Hellman problem. Supposing the Diffie-Hellman problem is computationally infeasible, Elgamal is secure under a chosen-plaintext attack, where security indicates that it is difficult for the attacker, given the ciphertext, to restore the whole of the plaintext. However, although it is secure against a chosen-plaintext attack, Elgamal is completely malleable, i.e., it is not secure against an adaptive chosen-ciphertext attack, where the attacker can recover the plaintext. We present an extension of Elgamal encryption which results in non-malleability against adaptive chosen-ciphertext attacks using concatenation and a cryptographic hash function; our proof utilizes the notion of plaintext awareness. The proposed algorithm can be used in a cryptographic voting protocol given its security level. Our protocol protects the confidentiality of voters because each voter encrypts their choice before casting their vote, offers public verifiability using a signing algorithm, computes the final result correctly using the homomorphic property, and works even in the presence of an adversary due to the property of non-malleability. Moreover, the protocol prevents parties from colluding to fix the vote results.
Keywords: Elgamal encryption, non-malleability, plaintext aware, e-voting
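To make the malleability and homomorphism discussion concrete, the sketch below implements textbook Elgamal over a tiny prime group and shows both the multiplicative homomorphic property useful for tallying and the malleability that motivates the hash/concatenation hardening described in the abstract. The parameters are toy values for illustration only, and the scheme shown is plain Elgamal, not the authors' hardened variant.

```python
import random

# Toy parameters (far too small for real security) -- illustration only
p = 467           # prime modulus
g = 2             # group element used as the base
x = random.randrange(2, p - 2)   # private key
h = pow(g, x, p)                 # public key component

def encrypt(m):
    r = random.randrange(2, p - 2)
    return (pow(g, r, p), (m * pow(h, r, p)) % p)

def decrypt(c):
    c1, c2 = c
    s = pow(c1, x, p)
    return (c2 * pow(s, p - 2, p)) % p   # multiply by s^{-1} mod p (p is prime)

m1, m2 = 12, 30
c1, c2 = encrypt(m1), encrypt(m2)

# Multiplicative homomorphism: componentwise product decrypts to m1*m2 mod p.
c_prod = ((c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p)
print(decrypt(c_prod) == (m1 * m2) % p)   # True

# The same property makes textbook Elgamal malleable: anyone can turn a ciphertext
# for m into a ciphertext for 2*m without knowing m.
c_malled = (c1[0], (2 * c1[1]) % p)
print(decrypt(c_malled) == (2 * m1) % p)  # True
```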
Procedia PDF Downloads 451
501 Standard Resource Parameter Based Trust Model in Cloud Computing
Authors: Shyamlal Kumawat
Abstract:
Cloud computing is shifting the way IT capital is utilized. Cloud computing dynamically delivers convenient, on-demand access to shared pools of software resources, platforms and hardware as a service through the internet. The cloud computing model is made possible by sophisticated automation, provisioning and virtualization technologies. Users want the ability to access these services, including infrastructure resources, how and when they choose. To accommodate this shift in the consumption model, the technology has to deal with the security, compatibility and trust issues associated with delivering that convenience to application business owners, developers and users. Among these issues, trust has attracted extensive attention in cloud computing as a solution to enhance security. This paper proposes a trusted computing technology through a Standard Resource Parameter Based Trust Model in Cloud Computing to select appropriate cloud service providers. The direct trust of cloud entities is computed on the basis of past interaction evidence and sustained by their present performance. Various SLA parameters between consumer and provider are considered in the trust computation and compliance process. The simulations are performed using the CloudSim framework, and the experimental results show that the proposed model is effective and extensible.
Keywords: cloud, IaaS, SaaS, PaaS
Procedia PDF Downloads 330
500 Computational Study of Blood Flow Analysis for Coronary Artery Disease
Authors: Radhe Tado, Ashish B. Deoghare, K. M. Pandey
Abstract:
The aim of this study is to estimate the effect of blood flow through the coronary artery in the human heart so as to assess coronary artery disease. Velocity, wall shear stress (WSS), strain rate and wall pressure distribution are some of the important hemodynamic parameters that are non-invasively assessed with computational fluid dynamics (CFD). These parameters are used to identify the mechanical factors responsible for plaque progression and/or rupture in the left coronary artery (LCA). The initial step for the CFD simulations was the construction of a geometrical model of the LCA. A patient-specific artery model is constructed from computed tomography (CT) scan data with the help of MIMICS Research 19.0. For the CFD analysis, ANSYS FLUENT 14.5 is used. Hemodynamic parameters were quantified and flow patterns were visualized both in the absence and in the presence of coronary plaques. The wall pressure continuously decreased towards the distal segments and showed pressure drops in stenotic segments. Areas of high WSS and high flow velocities were found adjacent to plaque depositions.
Keywords: angiography, computational fluid dynamics (CFD), time-average wall shear stress (TAWSS), wall pressure, wall shear stress (WSS)
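As a sanity check on CFD-derived wall shear stress, the sketch below evaluates the analytical Poiseuille estimate τ_w = 4μQ/(πR³) for an idealized straight coronary segment. The viscosity, flow rate and radii are assumed typical values for illustration, not data from the patient-specific model.

```python
import math

mu = 0.0035          # blood dynamic viscosity, Pa*s (assumed Newtonian)
Q = 1.0e-6           # volumetric flow rate, m^3/s (~60 mL/min, illustrative)
R_healthy = 1.5e-3   # lumen radius of a healthy segment, m
R_stenotic = 0.9e-3  # reduced radius at a stenosis, m

def poiseuille_wss(mu, Q, R):
    """Wall shear stress of fully developed laminar pipe flow: tau_w = 4*mu*Q/(pi*R^3)."""
    return 4.0 * mu * Q / (math.pi * R**3)

for label, R in [("healthy", R_healthy), ("stenotic", R_stenotic)]:
    print(f"{label:9s} segment: WSS = {poiseuille_wss(mu, Q, R):.2f} Pa")
# The cubic dependence on radius explains why WSS rises sharply adjacent to plaques.
```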
Procedia PDF Downloads 183
499 Overnutrition in Adolescents and Its Associated Factors in Dale District Schools in Ethiopia: A Cross-Sectional Study
Authors: Beruk Berhanu Desalegn, Tona Diddana, Alemneh Daba
Abstract:
Objective: The aim of this study was to assess the magnitude and determinants of overnutrition among school-going adolescents in Dale District, Ethiopia. Methods: An institution-based cross-sectional study was conducted between November and December 2020. A total of 333 school-going adolescents aged 10-19 years participated. Socio-demographic, lifestyle, physical activity level, estimated individual dietary energy intake, and height and weight data were collected. Body Mass Index-for-age (BAZ) was computed. Results: The magnitude of overnutrition was 7.2% (10.8% in urban vs. 3.6% in rural areas). Lack of an adequate playing area (AOR=2.53, 95% CI: 1.02, 6.26), being an urban resident (AOR=3.05, 95% CI: 1.12, 8.29), having more energy intake than expenditure (AOR=9.47, 95% CI: 1.58, 56.80), having consumed fast foods in the month before the survey (AOR=2.60, 95% CI: 1.93, 6.83), having moderate physical activity (PA) (AOR=9.28, 95% CI: 6.70, 71.63), low PA (AOR=7.95, 95% CI: 1.12, 56.72), and having snacks between meals (AOR=3.32, 95% CI: 1.15, 9.58) were positively associated with overnutrition. Conclusion: The magnitude of overnutrition among school-going adolescents was lower compared to previous reports in Ethiopia. Sedentary lifestyles, excess calorie intake, lack of adequate playing areas in the schools, and having snacks between meals were significant determinants of overnutrition in the study area.
Keywords: adolescent, over-nutrition, school, Ethiopia
Procedia PDF Downloads 66
498 Robustness Analysis of the Carbon and Nitrogen Co-Metabolism Model of Mucor mucedo
Authors: Nahid Banihashemi
Abstract:
An emerging important area of the life sciences is systems biology, which involves understanding the integrated behavior of large numbers of components interacting via non-linear reaction terms. A centrally important problem in this area is an understanding of the co-metabolism of protein and carbohydrate, as it has been clearly demonstrated that the ratio of these metabolites in the diet is a major determinant of obesity and related chronic disease. In this regard, we have considered a systems biology model for the co-metabolism of carbon and nitrogen in colonies of the fungus Mucor mucedo. Oscillations are an important diagnostic of the underlying dynamical processes of this model. The maintenance of specific patterns of oscillation and their relation to the robustness of this system are the important issues targeted in this paper. A parametric sensitivity approach has been considered for the theoretical analysis of the robustness of this model. As a result, the parameters of the model which produce the largest sensitivities have been identified. Furthermore, the largest changes that can be made in each parameter of the model without losing the oscillations in biomass production have been computed. The results are obtained from the implementation of parametric sensitivity analysis in Matlab.
Keywords: systems biology, parametric sensitivity analysis, robustness, carbon and nitrogen co-metabolism, Mucor mucedo
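The general idea of parametric sensitivity analysis can be illustrated on any oscillatory model: perturb one parameter at a time and measure how a feature of the oscillation (here, the amplitude) changes. The sketch below does this with finite differences on a generic Lotka-Volterra-type oscillator in Python rather than Matlab; the model and parameter values are placeholders, not the actual carbon-nitrogen co-metabolism model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def oscillator(t, y, a, b, c, d):
    """Generic two-species oscillator (Lotka-Volterra form), standing in for the real model."""
    x1, x2 = y
    return [a * x1 - b * x1 * x2, c * x1 * x2 - d * x2]

def amplitude(params, t_end=100.0):
    """Oscillation amplitude of the first state over the second half of the run."""
    sol = solve_ivp(oscillator, (0, t_end), [1.0, 0.5], args=tuple(params), max_step=0.05)
    tail = sol.y[0][sol.t > t_end / 2]
    return tail.max() - tail.min()

p0 = np.array([1.0, 0.5, 0.3, 0.8])   # nominal parameter values (illustrative)
base = amplitude(p0)

# Normalized finite-difference sensitivities: (p/A) * dA/dp
for i, name in enumerate(["a", "b", "c", "d"]):
    dp = 0.01 * p0[i]
    p_up = p0.copy(); p_up[i] += dp
    sens = (amplitude(p_up) - base) / dp * (p0[i] / base)
    print(f"sensitivity of amplitude to {name}: {sens:+.3f}")
```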
Procedia PDF Downloads 328
497 Methyltrioctylammonium Chloride as a Separation Solvent for Binary Mixtures: Evaluation Based on Experimental Activity Coefficients
Authors: B. Kabane, G. G. Redhi
Abstract:
An ammonium-based ionic liquid (methyltrioctylammonium chloride) [N8 8 8 1][Cl] was investigated as a potential extraction solvent for volatile organic solvents (in this regard, solutes), which include alkenes, alkanes, ketones, alkynes, aromatic hydrocarbons, tetrahydrofuran (THF), alcohols, thiophene, water and acetonitrile, based on experimental activity coefficients at infinite dilution. The measurements were conducted by gas-liquid chromatography at four different temperatures (313.15 to 343.15) K. The experimental activity coefficient data obtained across the examined temperatures were used to calculate the physicochemical properties at infinite dilution, such as the partial molar excess enthalpy, Gibbs free energy and entropy term. Capacity and selectivity data for selected petrochemical extraction problems (heptane/thiophene, heptane/benzene, cyclohexane/cyclohexene, hexane/toluene, hexane/hexene) were computed from the activity coefficient data and compared to literature values for other ionic liquids. Evaluation of activity coefficients at infinite dilution expands the knowledge and provides a good understanding of the interactions between the ionic liquid and the investigated compounds.
Keywords: separation, activity coefficients, methyltrioctylammonium chloride, ionic liquid, capacity
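The selectivity and capacity quoted above follow directly from the infinite-dilution activity coefficients via the standard definitions S∞₁₂ = γ∞₁/γ∞₂ and k∞₂ = 1/γ∞₂. The sketch below applies these definitions to made-up γ∞ values for a heptane/thiophene pair, purely to show the arithmetic; the measured coefficients from the study are not reproduced here.

```python
# Hypothetical infinite-dilution activity coefficients in the ionic liquid
# (illustrative numbers only, not the measured values from the study)
gamma_inf = {"heptane": 25.0, "thiophene": 1.8}

def selectivity(g1, g2):
    """S12_inf = gamma1_inf / gamma2_inf (1 = raffinate component, 2 = extracted solute)."""
    return g1 / g2

def capacity(g2):
    """k2_inf = 1 / gamma2_inf for the extracted solute."""
    return 1.0 / g2

S = selectivity(gamma_inf["heptane"], gamma_inf["thiophene"])
k = capacity(gamma_inf["thiophene"])
print(f"heptane/thiophene: selectivity = {S:.1f}, capacity = {k:.2f}")
```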
Procedia PDF Downloads 143
496 Landmark Based Catch Trends Assessment of Gray Eel Catfish (Plotosus canius) at Mangrove Estuary in Bangladesh
Authors: Ahmad Rabby
Abstract:
The present study assesses the catch trends of gray eel catfish (Plotosus canius) on the basis of monthly length-frequency data collected from a mangrove estuary in Bangladesh from January 2017 to December 2018. A total of 1298 specimens were collected; the total length (TL) and weight (W) of P. canius ranged from 13.3 cm to 87.4 cm and from 28 g to 5200 g, respectively. The length-weight relationship was W=0.006 L2.95 with R2=0.972 for both sexes. The von Bertalanffy growth function parameters were L∞=93.25 cm and K=0.28 yr-1, with a hypothetical age at zero length of t0=0.059 years and goodness of fit of Rn=0.494. The growth performance indices for L∞ and W∞ were computed as Φ'=3.386 and Φ=1.84, respectively. The size at first sexual maturity was estimated as a TL of 48.8 cm for pooled sexes. The natural mortality was 0.51 yr-1 at an average annual water surface temperature of 22 °C. The total instantaneous mortality was 1.24 yr-1 at CI95% of 0.105–1.42 (r2=0.986), while fishing mortality was 0.73 yr-1 and the current exploitation ratio 0.59. Recruitment continued throughout the year with one major peak of 17.20-17.96% during May-June. The Beverton-Holt yield-per-recruit model was analyzed with FiSAT-II; when tc was 1.43 yr, Fmax was estimated as 0.6 yr-1 and F0.1 as 0.33 yr-1. The current age at first capture was approximately 0.6 year; however, Fcurrent = 0.73 yr-1, which is beyond F0.1, indicating that the current stock of P. canius in Bangladesh is overexploited.
Keywords: Plotosus canius, mangrove estuary, asymptotic length, FiSAT-II
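The growth and length-weight relationships reported above can be evaluated directly. The sketch below uses the published values L∞ = 93.25 cm, K = 0.28 yr⁻¹, t0 = 0.059 yr and W = 0.006·L^2.95 to tabulate predicted length and weight at age; only the age grid is chosen arbitrarily for illustration.

```python
import numpy as np

# Parameters reported in the abstract
L_inf, K, t0 = 93.25, 0.28, 0.059      # von Bertalanffy growth parameters
a, b = 0.006, 2.95                      # length-weight relationship W = a * L^b

def vbgf_length(t):
    """von Bertalanffy growth function: L(t) = L_inf * (1 - exp(-K*(t - t0)))."""
    return L_inf * (1.0 - np.exp(-K * (t - t0)))

ages = np.arange(1, 11)                 # ages 1..10 years (illustrative grid)
lengths = vbgf_length(ages)
weights = a * lengths**b

for t, L, W in zip(ages, lengths, weights):
    print(f"age {t:2d} yr: TL = {L:5.1f} cm, W = {W:6.0f} g")
```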
Procedia PDF Downloads 151
495 Hawking Radiation of Grumiller Black Hole
Authors: Sherwan Kher Alden Yakub Alsofy
Abstract:
In this paper, we consider the relativistic Hamilton-Jacobi (HJ) equation and study the Hawking radiation (HR) of scalar particles from the uncharged Grumiller black hole (GBH), which is amenable to testing in astrophysics. The GBH is also known as the Rindler-modified Schwarzschild BH. Our aim is not only to investigate the effect of the Rindler parameter A on the Hawking temperature (TH), but also to examine whether there is any discrepancy between the computed horizon temperature and the standard TH. For this purpose, in addition to its naive coordinate system, we study the three regular coordinate systems: Painlevé-Gullstrand (PG), ingoing Eddington-Finkelstein (IEF) and Kruskal-Szekeres (KS) coordinates. In all coordinate systems, we calculate the tunneling probabilities of incoming and outgoing scalar particles from the event horizon by using the HJ equation. It is shown in detail that the considered HJ method reproduces the conventional TH in all these coordinate systems without giving rise to the famous factor-2 problem. Furthermore, in the PG coordinates, Parikh-Wilczek's tunneling (PWT) method is employed in order to show how one can integrate the quantum gravity (QG) corrections into the semiclassical tunneling rate by including the effects of self-gravitation and back reaction. We then show how these corrections yield a modification of TH.
Keywords: ingoing Eddington-Finkelstein coordinates, Parikh-Wilczek tunneling, Hamilton-Jacobi equation
Procedia PDF Downloads 615
494 In-silico Analysis of Plumbagin against Cancer Receptors
Authors: Arpita Roy, Navneeta Bharadvaja
Abstract:
Cancer is an uncontrolled growth of abnormal cells in the body. It is one of the most serious diseases, on which extensive research work has been going on all over the world. Structure-based drug design is a computational approach which helps in the identification of potential leads that can be used for the development of a drug. Plumbagin is a naphthoquinone derivative from Plumbago zeylanica roots and belongs to one of the largest and most diverse groups of plant metabolites. Anticancer and antiproliferative activities of plumbagin have been observed in animal models as well as in cell cultures. Plumbagin shows inhibitory effects on multiple cancer-signaling proteins; however, the binding mode and the molecular interactions have not yet been elucidated for most of these protein targets. In this investigation, an attempt was made to provide structural insights into the binding mode of plumbagin against four cancer receptors using molecular docking. Plumbagin showed minimal binding energy against the targeted cancer receptors, suggesting its stability and potential against different cancers. The lowest binding energies of plumbagin with COX-2, TACE, and CDK6 are -5.39, -4.93, and -4.81 kcal/mol, respectively. Comparison studies of plumbagin with different receptors showed that it is a promising compound for cancer treatment. It was also found that plumbagin obeys Lipinski's Rule of 5, and the computed ADMET properties showed drug-likeness and improved bioavailability. Since plumbagin is from a natural source, it has reduced side effects, and these results would be useful for cancer treatment.
Keywords: cancer, receptor, plumbagin, docking
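Lipinski's Rule of 5 mentioned above is straightforward to evaluate once a molecule's descriptors are known: molecular weight ≤ 500 Da, logP ≤ 5, hydrogen-bond donors ≤ 5 and acceptors ≤ 10. The sketch below checks these thresholds for approximate plumbagin descriptors (MW ≈ 188 Da for C11H8O3; the logP and H-bond counts are assumed rough values, not the paper's ADMET output).

```python
def lipinski_pass(mw, logp, hbd, hba):
    """Return (passes, violations) for Lipinski's Rule of 5."""
    rules = {
        "MW <= 500": mw <= 500,
        "logP <= 5": logp <= 5,
        "H-bond donors <= 5": hbd <= 5,
        "H-bond acceptors <= 10": hba <= 10,
    }
    violations = [name for name, ok in rules.items() if not ok]
    return len(violations) == 0, violations

# Approximate descriptors for plumbagin (C11H8O3); logP is an assumed rough value.
passes, violations = lipinski_pass(mw=188.2, logp=2.2, hbd=1, hba=3)
print("Lipinski pass:", passes, "| violations:", violations)
```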
Procedia PDF Downloads 143
493 Atmospheric Oxidation of Carbonyls: Insight to Mechanism, Kinetic and Thermodynamic Parameters
Authors: Olumayede Emmanuel Gbenga, Adeniyi Azeez Adebayo
Abstract:
Carbonyls are the first-generation products of tropospheric degradation reactions of volatile organic compounds (VOCs). This computational study examined the mechanism of removal of carbonyls from the atmosphere via the hydroxyl radical. The kinetics of the reactions were computed from the activation energy, using the enthalpy (ΔH‡) and Gibbs free energy (ΔG‡) of activation. The minimum energy path (MEP) analysis reveals that in all the molecules the products are more stable in energy than the reactants, which implies that the forward reaction is more thermodynamically favorable. Hydrogen abstraction from the aromatic aldehydes, especially those without methyl substituents, is more kinetically favorable than from the other aldehydes, in the order aromatic (without methyl or with meta methyl) > alkene (short chain) > diene > long-chain aldehydes. The activation energy is much lower for the forward reaction than for the backward one, indicating that the forward reactions are more kinetically favored than their backward reactions. In terms of thermodynamic stability, the aromatic compounds are found to be less favorable in comparison to the aliphatic ones. The study concludes that the chemistry of the carbonyl bond of the aldehyde changed significantly from the reactants to the products.
Keywords: atmospheric carbonyls, oxidation, mechanism, kinetic, thermodynamic
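Rate constants derived from activation parameters, as described above, typically follow the Eyring equation k = (k_B·T/h)·exp(−ΔG‡/RT). The sketch below evaluates it at 298 K for a few assumed barrier heights to show how strongly the rate depends on the activation free energy; the ΔG‡ values are illustrative, not the study's computed barriers.

```python
import math

kB = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

def eyring_rate(dG_act_kJ, T=298.15):
    """Eyring equation: k = (kB*T/h) * exp(-dG_act / (R*T)), with dG_act in kJ/mol."""
    return (kB * T / h) * math.exp(-dG_act_kJ * 1e3 / (R * T))

# Illustrative activation free energies (kJ/mol)
for dG in (20.0, 40.0, 60.0):
    print(f"dG_act = {dG:5.1f} kJ/mol -> k = {eyring_rate(dG):.3e} s^-1")
```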
Procedia PDF Downloads 50
492 Deep Learning Based on Image Decomposition for Restoration of Intrinsic Representation
Authors: Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Kensuke Nakamura, Dongeun Choi, Byung-Woo Hong
Abstract:
Artefacts are commonly encountered in the imaging process of clinical computed tomography (CT), where an artefact refers to any systematic discrepancy between the reconstructed observation and the true attenuation coefficient of the object. It is known that CT images are inherently prone to artefacts due to their image formation process, in which a large number of independent detectors are involved and assumed to yield consistent measurements. There are a number of different artefact types, including noise, beam hardening, scatter, pseudo-enhancement, motion, helical, ring, and metal artefacts, which cause serious difficulties in reading images. Thus, it is desired to remove nuisance factors from the degraded image, leaving the fundamental intrinsic information that can provide a better interpretation of the anatomical and pathological characteristics. However, this is considered a difficult task due to the high dimensionality and variability of the data to be recovered, which naturally motivates the use of machine learning techniques. We propose an image restoration algorithm based on a deep neural network framework in which denoising auto-encoders are stacked to build multiple layers. The denoising auto-encoder is a variant of the classical auto-encoder that takes input data and maps it to a hidden representation through a deterministic mapping using a non-linear activation function. The latent representation is then mapped back into a reconstruction whose size is the same as that of the input data. The reconstruction error can be measured by the traditional squared error, assuming the residual follows a normal distribution. In addition to the designed loss function, an effective regularization scheme is applied, using residual-driven dropout determined based on the gradient at each layer. The optimal weights are computed by the classical stochastic gradient descent algorithm combined with back-propagation. In our algorithm, we initially decompose an input image into its intrinsic representation and the nuisance factors, including artefacts, based on the classical Total Variation problem, which can be efficiently optimized by convex optimization algorithms such as the primal-dual method. The intrinsic forms of the input images are provided to the deep denoising auto-encoders together with their original forms in the training phase. In the testing phase, a given image is first decomposed into its intrinsic form and then provided to the trained network to obtain its reconstruction. We apply our algorithm to the restoration of CT images corrupted by artefacts. It is shown that our algorithm improves the readability and enhances the anatomical and pathological properties of the object. The quantitative evaluation is performed in terms of the PSNR, and the qualitative evaluation shows significant improvement in reading images despite degrading artefacts. The experimental results indicate the potential of our algorithm as a prior solution to the image interpretation tasks in a variety of medical imaging applications. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
Keywords: auto-encoder neural network, CT image artefact, deep learning, intrinsic image representation, noise reduction, total variation
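As a minimal illustration of the denoising auto-encoder building block described above, the sketch below trains a single-layer denoising auto-encoder on random image patches in PyTorch. The architecture, patch size and noise level are placeholder choices and do not reproduce the authors' stacked network, residual-driven dropout or Total Variation decomposition.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: flattened 16x16 "patches" standing in for CT image patches (illustrative only)
clean = torch.rand(512, 256)

class DenoisingAE(nn.Module):
    def __init__(self, dim_in=256, dim_hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in, dim_hidden), nn.ReLU())
        self.decoder = nn.Linear(dim_hidden, dim_in)

    def forward(self, x_noisy):
        return self.decoder(self.encoder(x_noisy))

model = DenoisingAE()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()   # squared error, consistent with a Gaussian residual assumption

for epoch in range(50):
    noisy = clean + 0.1 * torch.randn_like(clean)   # corrupt the input
    recon = model(noisy)
    loss = loss_fn(recon, clean)                    # reconstruct the *clean* target
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final reconstruction MSE: {loss.item():.4f}")
```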
Procedia PDF Downloads 190
491 Site Specific Ground Response Estimations for the Vulnerability Assessment of the Buildings of the Third Biggest Mosque in the World, Algeria’s Mosque
Authors: S. Mohamadi, T. Boudina, A. Rouabeh, A. Seridi
Abstract:
Equivalent linear and non-linear ground response analyses are conducted at many representative sites at the mosque of Algeria to compare the free-field acceleration spectra with the local code of practice. The Spectral Analysis of Surface Waves (SASW) technique was adopted to measure the in-situ shear wave velocity profile at the representative sites. The seismic motion imposed on the rock is the NS component recorded at the Keddara station during the Boumerdes earthquake of 21 May 2003. The site-specific elastic design spectra for each site are determined to further obtain site-specific non-linear acceleration spectra. As a case study, the results of the site-specific evaluations are presented for two building sites (the minaret site and the prayer hall site) to demonstrate the influence of local geological conditions on ground response at Algerian sites. A comparison of the computed response with the standard code of practice currently used in Algeria for the seismic zone of Algiers indicated that the design spectrum is not able to capture site amplification due to local geological conditions.
Keywords: equivalent linear, non-linear, ground response analysis, design response spectrum
Procedia PDF Downloads 448
490 A New 3D Shape Descriptor Based on Multi-Resolution and Multi-Block CS-LBP
Authors: Nihad Karim Chowdhury, Mohammad Sanaullah Chowdhury, Muhammed Jamshed Alam Patwary, Rubel Biswas
Abstract:
In content-based 3D shape retrieval systems, achieving high search performance has become an important research problem. A challenging aspect of this problem is to find an effective shape descriptor which can discriminate similar shapes adequately. To address this problem, we propose a new shape descriptor for 3D shape models by combining multi-resolution with the multi-block center-symmetric local binary pattern operator. Given an arbitrary 3D shape, we first apply pose normalization and generate a set of multi-viewed 2D rendered images. Second, we apply a Gaussian multi-resolution filter to generate several levels of images from each 2D rendered image. Then, overlapped sub-images are computed for each image level of a multi-resolution image. Our unique multi-block CS-LBP comes next: it allows the center to be composed of m-by-n rectangular pixels, instead of a single pixel. This process is repeated for all the 2D rendered images, derived from both 'depth-buffer' and 'silhouette' rendering. Finally, we concatenate all the feature vectors into a one-dimensional histogram as our proposed 3D shape descriptor. Through several experiments, we demonstrate that our proposed 3D shape descriptor outperforms previous methods on a benchmark dataset.
Keywords: 3D shape retrieval, 3D shape descriptor, CS-LBP, overlapped sub-images
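For readers unfamiliar with the operator, the sketch below computes the basic 8-neighbour center-symmetric LBP code for each interior pixel of a small grayscale array: the four opposite neighbour pairs are compared against a threshold and encoded as a 4-bit code. The threshold and the test image are arbitrary illustrative choices, and the multi-block and multi-resolution stages of the proposed descriptor are omitted.

```python
import numpy as np

def cs_lbp(image, threshold=0.01):
    """Center-symmetric LBP with 8 neighbours: compare the 4 opposite neighbour pairs."""
    img = image.astype(float)
    # Offsets of the 8-neighbourhood, ordered so that offset i is opposite offset i+4
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=int)
    for i in range(4):
        dy1, dx1 = offsets[i]
        dy2, dx2 = offsets[i + 4]
        n1 = img[1 + dy1:h - 1 + dy1, 1 + dx1:w - 1 + dx1]
        n2 = img[1 + dy2:h - 1 + dy2, 1 + dx2:w - 1 + dx2]
        codes |= (((n1 - n2) > threshold).astype(int) << i)
    return codes  # values in [0, 15]; histogram them to get the descriptor

patch = np.random.rand(8, 8)          # stand-in for one rendered 2D view
hist = np.bincount(cs_lbp(patch).ravel(), minlength=16)
print(hist)
```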
Procedia PDF Downloads 445
489 Image Segmentation with Deep Learning of Prostate Cancer Bone Metastases on Computed Tomography
Authors: Joseph M. Rich, Vinay A. Duddalwar, Assad A. Oberai
Abstract:
Prostate adenocarcinoma is the most common cancer in males, with osseous metastases as the commonest site of metastatic prostate carcinoma (mPC). Treatment monitoring is based on the evaluation and characterization of lesions on multiple imaging studies, including Computed Tomography (CT). Monitoring of the osseous disease burden, including follow-up of lesions and identification and characterization of new lesions, is a laborious task for radiologists. Deep learning algorithms are increasingly used to perform tasks such as identification and segmentation of osseous metastatic disease and to provide accurate information regarding metastatic burden. Here, nnUNet was used to produce a model which can segment CT scan images of prostate adenocarcinoma vertebral bone metastatic lesions. nnUNet is an open-source Python package that adds optimizations to the deep learning-based UNet architecture but has not been extensively combined with transfer learning techniques due to the absence of readily available functionality for this method. The IRB-approved study data set includes imaging studies from patients with mPC who were enrolled in clinical trials at the University of Southern California (USC) Health Science Campus and Los Angeles County (LAC)/USC medical center. Manual segmentation of metastatic lesions was completed by an expert radiologist, Dr. Vinay Duddalwar (20+ years in radiology and oncologic imaging), to serve as the ground truth for the automated segmentation. Despite nnUNet's success on some medical segmentation tasks, it only produced an average Dice Similarity Coefficient (DSC) of 0.31 on the USC dataset. DSC results fell in a bimodal distribution, with most scores falling either over 0.66 (reasonably accurate) or at 0 (no lesion detected). Applying more aggressive data augmentation techniques dropped the DSC to 0.15, and reducing the number of epochs reduced the DSC to below 0.1. Datasets have been identified for transfer learning, which involves balancing the size and similarity of the dataset. Identified datasets include the Pancreas data from the Medical Segmentation Decathlon, Pelvic Reference Data, and CT volumes with multiple organ segmentations (CT-ORG). Some of the challenges of producing an accurate model from the USC dataset include small dataset size (115 images), 2D data (as nnUNet generally performs better on 3D data), and the limited amount of public data capturing annotated CT images of bone lesions. Optimizations and improvements will be made by applying transfer learning and generative methods, including incorporating generative adversarial networks and diffusion models, in order to augment the dataset. Performance with different libraries, including MONAI and custom architectures with PyTorch, will be compared. In the future, molecular correlations will be tracked with radiologic features for the purpose of multimodal composite biomarker identification. Once validated, these models will be incorporated into evaluation workflows to optimize radiologist evaluation. Our work demonstrates the challenges of applying automated image segmentation to small medical datasets and lays a foundation for techniques to improve performance. As machine learning models become increasingly incorporated into the workflow of radiologists, these findings will help improve the speed and accuracy of vertebral metastatic lesion detection.
Keywords: deep learning, image segmentation, medicine, nnUNet, prostate carcinoma, radiomics
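The Dice Similarity Coefficient used as the evaluation metric above is defined as DSC = 2|A∩B| / (|A| + |B|) for a predicted mask A and a ground-truth mask B. The sketch below computes it for two small binary masks invented for illustration.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """DSC = 2*|A intersect B| / (|A| + |B|) for binary masks; eps guards the empty case."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Tiny illustrative masks (1 = lesion voxel)
ground_truth = np.array([[0, 1, 1, 0],
                         [0, 1, 1, 0],
                         [0, 0, 0, 0]])
prediction   = np.array([[0, 1, 0, 0],
                         [0, 1, 1, 1],
                         [0, 0, 0, 0]])

print(f"DSC = {dice_coefficient(prediction, ground_truth):.2f}")  # 0.75 for these masks
```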
Procedia PDF Downloads 96
488 Length Dimension Correlates of Longitudinal Physical Conditioning on Indian Male Youth
Authors: Seema Sharma Kaushik, Dhananjoy Shaw
Abstract:
Various length dimensions of the body have been variables of interest in the research areas of kinanthropometry. However, the inclusion of length measurements in various studies remains restricted to reflecting characteristics of a particular game/sport at a particular time. Hence, the present investigation was conducted to study various length dimension correlates of a longitudinal physical conditioning program in Indian male youth. The study was conducted on 90 Indian male youth. The sample was equally divided into three groups, namely progressive load training (PLT), constant load training (CLT) and no load training (NL). The variables included sitting height, leg length, arm length and foot length. The study adopted a multi-group repeated-measures design. The three groups were measured four times, after completion of each of the three meso-cycles of six weeks' duration each. The measurements were taken using standard landmarks and procedures. Mean, standard deviation and analysis of co-variance were computed to analyze the data statistically. Post-hoc analysis was conducted for the significant F-ratios at the 0.05 level. The study concluded that the longitudinal physical conditioning program followed had a significant effect on various length dimensions of Indian male youth.
Keywords: Indian male youth, longitudinal, length dimensions, physical conditioning
Procedia PDF Downloads 155
487 Predictive Value of ¹⁸F-Fdg Accumulation in Visceral Fat Activity to Detect Colorectal Cancer Metastases
Authors: Amil Suleimanov, Aigul Saduakassova, Denis Vinnikov
Abstract:
Objective: To assess functional visceral fat (VAT) activity evaluated by ¹⁸F-fluorodeoxyglucose (¹⁸F-FDG) positron emission tomography/computed tomography (PET/CT) as a predictor of metastases in colorectal cancer (CRC). Materials and methods: We assessed 60 patients with histologically confirmed CRC who underwent ¹⁸F-FDG PET/CT after surgical treatment and courses of chemotherapy. Age, histology, stage, and tumor grade were recorded. Functional VAT activity was measured by the maximum standardized uptake value (SUVmax) using ¹⁸F-FDG PET/CT and tested as a predictor of later metastases in eight abdominal locations (RE – Epigastric Region, RLH – Left Hypochondriac Region, RRL – Right Lumbar Region, RU – Umbilical Region, RLL – Left Lumbar Region, RRI – Right Inguinal Region, RP – Hypogastric (Pubic) Region, RLI – Left Inguinal Region) and the pelvic cavity (P) in adjusted regression models. We also report the best areas under the curve (AUC) for SUVmax with the corresponding sensitivity (Se) and specificity (Sp). Results: In both the age-adjusted regression models and the ROC analysis, ¹⁸F-FDG accumulation in RLH (cutoff SUVmax 0.74; Se 75%; Sp 61%; AUC 0.668; p = 0.049), RU (cutoff SUVmax 0.78; Se 69%; Sp 61%; AUC 0.679; p = 0.035), RRL (cutoff SUVmax 1.05; Se 69%; Sp 77%; AUC 0.682; p = 0.032) and RRI (cutoff SUVmax 0.85; Se 63%; Sp 61%; AUC 0.672; p = 0.043) could predict later metastases in CRC patients, as opposed to age, sex, primary tumor location, tumor grade and histology. Conclusions: VAT SUVmax is significantly associated with later metastases in CRC patients and can be used as a predictor of them.
Keywords: ¹⁸F-FDG, PET/CT, colorectal cancer, predictive value
Procedia PDF Downloads 117
486 Weeds Density Affects Yield and Quality of Wheat Crop under Different Crop Densities
Authors: Ijaz Ahmad
Abstract:
Weed competition is one of the major biotic constraints on wheat crop productivity. Avena fatua L. and Silybum marianum (L.) Gaertn. are among the worst weeds of wheat, greatly deteriorating wheat quality and subsequently reducing its market value. In this connection, two-year experiments were conducted in 2018 and 2019. Wheat was sown at different seeding rates, viz. 80, 100, 120 and 140 kg ha-1, with different weed ratios (A. fatua : S. marianum) of 1:8, 2:7, 3:6, 4:5, 5:4, 6:3, 7:2, 8:1 and 0:0, respectively. The weed ratio and wheat density are inversely related; however, wheat sown at the rate of 140 kg ha-1 had minimal weed interference. Yield losses were 17.5% at a weed ratio of 1:8 and 7.2% at 8:1. Among wheat densities, the highest percentage losses were computed at 80 kg ha-1, while the lowest were recorded at 140 kg ha-1. Due to the large leaf canopy of S. marianum, other species cannot sustain their growth. Hence, it has been concluded that S. marianum is the main cause of reduction in the yield-related parameters, followed by A. fatua and the other weeds. Due to the morphological mimicry of A. fatua with the wheat crop during the vegetative growth stage, it cannot be easily distinguished. Therefore, managing A. fatua and S. marianum before seed setting is recommended for reducing the future weed problem. Based on the current studies, sowing wheat seed at the rate of 140 kg ha-1 is recommended to better compete with all the field weeds.
Keywords: fat content, holly thistle, protein content, weed competition, wheat, wild oat
Procedia PDF Downloads 207
485 Determining the Number of Words Required to Fulfil the Writing Task in an English Proficiency Exam with the Raters’ Scores
Authors: Defne Akinci Midas
Abstract:
The aim of this study was to determine the minimum and maximum number of words that would be sufficient to fulfill the writing task in the local English Proficiency Exam (EPE) produced and administered at the Middle East Technical University, Ankara, Turkey. The relationship between the number of words and the scores of the written products awarded by two raters in three online EPEs administered in 2020 was examined. The means, standard deviations, percentages, range, minimum and maximum scores, as well as correlations of the scores awarded to written products containing 0-50, 51-100, 101-150, 151-200, 201-250, 251-300 words, and so on, were computed. The results showed that the raters did not award a full score to texts that had fewer than 100 words. Moreover, the texts that had around 200 words were awarded the highest scores. The highest number of words that earned the highest scores was about 225, and from then onwards, the scores were either stable or lower. A positive low to moderate correlation was found between the number of words and the scores awarded to the texts. We understand that the idea of 'the longer, the better' did not apply here. The results also showed that word counts between 101 and about 225 were sufficient to fulfill the writing task and fully display writing skills and language ability in the specific case of this exam.
Keywords: English proficiency exam, number of words, scoring, writing task
Procedia PDF Downloads 175
484 Analytical Modelling of Surface Roughness during Compacted Graphite Iron Milling Using Ceramic Inserts
Authors: Ş. Karabulut, A. Güllü, A. Güldaş, R. Gürbüz
Abstract:
This study investigates the effects of the lead angle and chip thickness variation on surface roughness during the machining of compacted graphite iron using ceramic cutting tools under dry cutting conditions. Analytical models were developed for predicting the surface roughness values of the specimens after the face milling process. Experimental data were collected and imported into the artificial neural network model. A multilayer perceptron model was used with the back-propagation algorithm, employing the input parameters of lead angle, cutting speed and feed rate in connection with chip thickness. Furthermore, analysis of variance was employed to determine the effects of the cutting parameters on surface roughness. The artificial neural network and regression analysis were used to predict surface roughness. The values thus predicted were compared with the collected experimental data, and the corresponding percentage error was computed. The analysis results revealed that the lead angle is the dominant factor affecting surface roughness. Experimental results indicated an improvement in the surface roughness value with decreasing lead angle from 88° to 45°.
Keywords: CGI, milling, surface roughness, ANN, regression, modeling, analysis
Procedia PDF Downloads 448
483 Evaluation of Expected Annual Loss Probabilities of RC Moment Resisting Frames
Authors: Saemee Jun, Dong-Hyeon Shin, Tae-Sang Ahn, Hyung-Joon Kim
Abstract:
Building loss estimation methodologies, which have advanced considerably in recent decades, are usually used to estimate the socio-economic impacts resulting from seismic structural damage. In accordance with these methods, this paper presents the evaluation of the annual loss probability of a reinforced concrete moment-resisting frame designed according to the Korean Building Code. The annual loss probability is defined by (1) a fragility curve obtained from a capacity spectrum method similar to that adopted in HAZUS, and (2) a seismic hazard curve derived from annual frequencies of exceedance per peak ground acceleration. Seismic fragilities are computed to calculate the annual loss probability of a certain structure using functions depending on structural capacity, seismic demand, structural response and the probability of exceeding damage state thresholds. This study carried out a nonlinear static analysis to obtain the capacity of an RC moment-resisting frame selected as a prototype building. The analysis results show that the probability of extensive structural damage in the prototype building is expected to be 0.004% per year.
Keywords: expected annual loss, loss estimation, RC structure, fragility analysis
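The annual loss probability described above is essentially the convolution of a fragility curve with a hazard curve, P_annual = ∫ P(damage | pga) · |dH/d(pga)| d(pga), where H is the annual frequency of exceedance. The sketch below evaluates this integral numerically with a lognormal fragility and a power-law hazard curve; the median capacity, dispersion and hazard constants are placeholder values, not those of the prototype Korean frame.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import norm

# Lognormal fragility for the "extensive damage" state: P(DS | pga) = Phi(ln(pga/theta)/beta)
theta, beta = 0.8, 0.5          # median capacity (g) and dispersion -- illustrative values

# Simple power-law hazard curve: annual frequency of exceedance H(pga) = k0 * pga^(-k)
k0, k = 1e-4, 2.5               # illustrative hazard constants

pga = np.linspace(0.05, 3.0, 600)
fragility = norm.cdf(np.log(pga / theta) / beta)
hazard = k0 * pga**(-k)

# Annual probability of reaching the damage state:
# integrate the fragility against the absolute slope of the hazard curve.
dH = np.abs(np.gradient(hazard, pga))
annual_prob = trapezoid(fragility * dH, pga)
print(f"annual probability of extensive damage ~ {annual_prob:.2e}")
```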
Procedia PDF Downloads 397
482 Gender Effects in EEG-Based Functional Brain Networks
Authors: Mahdi Jalili
Abstract:
Functional connectivity in the human brain can be represented as a network using electroencephalography (EEG) signals. Network representation of EEG time series can be an efficient vehicle to understand the underlying mechanisms of brain function. Brain functional networks – whose nodes are brain regions and edges correspond to functional links between them – are characterized by neurobiologically meaningful graph theory metrics. This study investigates the degree to which graph theory metrics are sex dependent. To this end, EEGs from 24 healthy female subjects and 21 healthy male subjects were recorded in eyes-closed resting state conditions. The connectivity matrices were extracted using correlation analysis and were further binarized to obtain binary functional networks. Global and local efficiency measures – as graph theory metrics – were computed for the extracted networks. We found that male brains have a significantly greater global efficiency (i.e., global communicability of the network) across all frequency bands for a wide range of cost values in both hemispheres. Furthermore, for a range of cost values, female brains showed significantly greater right-hemispheric local efficiency (i.e., local connectivity) than male brains.
Keywords: EEG, brain, functional networks, network science, graph theory
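Global and local efficiency, as used above, are standard graph metrics available in networkx. The sketch below builds a binarized connectivity network by thresholding a random correlation matrix (standing in for the EEG-derived one) at a given cost and computes both measures; the matrix size, cost value and random data are illustrative only.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Stand-in for an EEG correlation matrix over 32 channels (symmetric, zero diagonal)
n = 32
corr = rng.uniform(0, 1, size=(n, n))
corr = (corr + corr.T) / 2
np.fill_diagonal(corr, 0.0)

# Binarize at a fixed "cost": keep the strongest edges so that a given fraction remains
cost = 0.2
n_edges_keep = int(cost * n * (n - 1) / 2)
iu = np.triu_indices(n, k=1)
threshold = np.sort(corr[iu])[::-1][n_edges_keep - 1]
adjacency = (corr >= threshold).astype(int)
np.fill_diagonal(adjacency, 0)

G = nx.from_numpy_array(adjacency)
print(f"global efficiency: {nx.global_efficiency(G):.3f}")
print(f"mean local efficiency: {nx.local_efficiency(G):.3f}")
```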
Procedia PDF Downloads 443
481 Effect of Threshold Corrections on Proton Lifetime and Emergence of Topological Defects in Grand Unified Theories
Authors: Rinku Maji, Joydeep Chakrabortty, Stephen F. King
Abstract:
The grand unified theory (GUT) rationalizes the arbitrariness of the standard model (SM) and explains many enigmas of nature within the framework of a single gauge group. GUTs predict proton decay, and the spontaneous symmetry breaking (SSB) of the higher symmetry group may lead to the formation of topological defects, which are indispensable in the context of cosmological observations. The Super-Kamiokande (Super-K) experiment sets stringent bounds on the partial proton-decay lifetime (τ) for different channels, e.g., τ(p → e⁺π⁰) > 1.6×10³⁴ years, which is the most relevant channel for testing the viability of nonsupersymmetric GUTs. The GUTs based on the gauge groups SO(10) and E(6) are broken to the SM spontaneously through one and two intermediate gauge symmetries, with the manifestation of the left-right symmetry at at least a single intermediate stage, and the proton lifetime for these breaking chains has been computed. The threshold corrections, which arise from integrating out the heavy fields at the breaking scales, alter the running of the gauge couplings and are eventually found to keep many GUTs off the Super-K bound. The possible topological defects arising in the course of SSB at different breaking scales for all breaking chains have been studied.
Keywords: grand unified theories, proton decay, threshold correction, topological defects
Procedia PDF Downloads 175
480 An Assessment of the Hip Muscular Imbalance for Patients with Rheumatism
Authors: Anthony Bawa, Konstantinos Banitsas
Abstract:
Rheumatism is a muscular disorder that affects the muscles of the upper and lower limbs. This condition could potentially progress to impair the movement of patients. This study aims to investigate the hip muscular imbalance in patients with chronic rheumatism. A clinical trial involving a total of 15 participants, made up of 10 patients and 5 control subjects, took place at KATH Hospital between August and September. Participants recruited for the study were of age 54 ± 8 years, weight 65 ± 8 kg, and height 176 ± 8 cm. Muscle signals were recorded from the rectus femoris and vastus lateralis on the right and left hips of the participants. The parameters used in determining the hip muscular imbalances were the maximum voluntary contraction (MVC%), the mean difference, and hip muscle fatigue levels. The mean signals were compared using a t-test, and the metrics for muscle fatigue assessment were based on the root mean square (RMS), mean absolute value (MAV) and mean frequency (MEF), which were computed between the hip muscles of the participants. The results indicated that there were significant imbalances in the muscle coactivity between the right and left hip muscles of patients. The patients' MVC values were observed to be above 10% when compared with control subjects. Furthermore, the mean difference was seen to be higher with p > 0.002 among patients, which indicated clear differences in the hip muscle contraction activities. The findings indicate significant hip muscular imbalances for patients with rheumatism compared with control subjects. Information about the imbalances among patients will be useful for clinicians in designing therapeutic muscle-strengthening exercises.
Keywords: muscular imbalances, rheumatism, hip
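The three fatigue metrics named above have standard definitions: RMS = sqrt(mean(x²)), MAV = mean(|x|), and the mean frequency is the power-spectrum-weighted average frequency. The sketch below computes them for a synthetic surface-EMG-like signal; the sampling rate and signal model are assumed for illustration and are not taken from the recorded data.

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0                       # assumed sampling rate, Hz
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(1)
# Synthetic EMG-like signal: amplitude-modulated noise bursts (illustrative only)
emg = rng.normal(0, 1, t.size) * (0.5 + 0.5 * np.sin(2 * np.pi * 0.5 * t) ** 2)

rms = np.sqrt(np.mean(emg**2))
mav = np.mean(np.abs(emg))

f, pxx = welch(emg, fs=fs, nperseg=1024)
mef = np.sum(f * pxx) / np.sum(pxx)   # mean (centroid) frequency of the power spectrum

print(f"RMS = {rms:.3f}, MAV = {mav:.3f}, MEF = {mef:.1f} Hz")
```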
Procedia PDF Downloads 115
479 Orientational Pair Correlation Functions Modelling of the LiCl6H2O by the Hybrid Reverse Monte Carlo: Using an Environment Dependence Interaction Potential
Authors: Mohammed Habchi, Sidi Mohammed Mesli, Rafik Benallal, Mohammed Kotbi
Abstract:
On the basis of four partial correlation functions and some geometric constraints obtained from neutron scattering experiments, a Reverse Monte Carlo (RMC) simulation has been performed in the study of the aqueous electrolyte LiCl6H2O in the glassy state. The obtained 3-dimensional model allows computing pair radial and orientational distribution functions in order to explore the structural features of the system. Unrealistic features appeared in some coordination peaks. To remedy this, we use the Hybrid Reverse Monte Carlo (HRMC) method, incorporating an energy constraint in addition to the usual constraints derived from experiments. The energy of the system is calculated using an Environment Dependence Interaction Potential (EDIP). The effect of the ions is studied by comparing correlations between water molecules in the solution and in pure water at room temperature. Our results show a good agreement between experimental and computed partial distribution functions (PDFs) as well as a significant improvement in the orientational distribution curves.
Keywords: LiCl6H2O, glassy state, RMC, HRMC
Procedia PDF Downloads 471
478 Data Security and Privacy Challenges in Cloud Computing
Authors: Amir Rashid
Abstract:
Cloud computing frameworks empower organizations to cut expenses by outsourcing computation resources on demand. At present, customers of cloud service providers have no means of verifying the privacy and ownership of their information and data. To address this issue, we propose a trusted cloud computing platform (TCCP). The TCCP enables Infrastructure as a Service (IaaS) providers, for example Amazon EC2, to provide a closed-box execution environment that guarantees confidential execution of guest virtual machines. It also allows customers to attest to the IaaS provider and determine whether the service is secure before they launch their virtual machines. This paper proposes a Trusted Cloud Computing Platform (TCCP) for guaranteeing the privacy and integrity of computed data that are outsourced to IaaS service providers. The TCCP provides the abstraction of a closed-box execution environment for a customer's VM, ensuring that no privileged administrator of the cloud provider can inspect or tamper with its data. Furthermore, before launching the VM, the TCCP allows a customer to reliably and remotely verify that the provider's backend is running a trusted TCCP. This capability extends the attestation to the whole service and hence allows a customer to verify that its data are handled in a secure mode.
Keywords: cloud security, IaaS, cloud data privacy and integrity, hybrid cloud
Procedia PDF Downloads 299