Search results for: fast generalized multi-directional Radon transform
3334 MRI R2* of Liver in an Animal Model
Authors: Chiung-Yun Chang, Po-Chou Chen, Jiun-Shiang Tzeng, Ka-Wai Mac, Chia-Chi Hsiao, Jo-Chi Jao
Abstract:
This study aimed to measure R2* relaxation rates in the liver of New Zealand White (NZW) rabbits. The R2* relaxation rate has been widely used in various hepatic diseases to assess iron overload by quantifying the iron content of the liver. The R2* relaxation rate is defined as the reciprocal of the T2* relaxation time and depends mainly on the composition of the tissue; different tissues have different R2* relaxation rates. The signal intensity decay in magnetic resonance imaging (MRI) may be characterized by R2* relaxation rates. In this study, a 1.5 T GE Signa HDxt whole-body MR scanner equipped with an 8-channel high-resolution knee coil was used to observe R2* values in NZW rabbit liver and muscle. Eight healthy NZW rabbits weighing 2–2.5 kg were recruited. After anesthesia using a Zoletil 50 and Rompun 2% mixture, the abdomen of each rabbit was landmarked at the center of the knee coil to perform a 3-plane localizer scan using a fast spoiled gradient echo (FSPGR) pulse sequence. Afterward, multi-planar fast gradient echo (MFGR) scans were performed with 8 different echo times (TEs) (2/4/6/8/10/12/14/16 ms) to acquire images for R2* calculations. Regions of interest (ROIs) in liver and muscle were measured using an Advantage workstation. Finally, R2* was obtained by a linear regression of ln(SI) on TE. The results showed that the longer the echo time, the smaller the signal intensity. The R2* values of liver and muscle were 44.8 ± 10.9 s⁻¹ and 37.4 ± 9.5 s⁻¹, respectively, implying that the iron concentration of liver is higher than that of muscle. In conclusion, R2* is correlated with the iron content in tissue. The correlation between R2* and iron content in NZW rabbits might be valuable for further exploration.
Keywords: liver, magnetic resonance imaging, muscle, R2* relaxation rate
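A minimal sketch of the R2* calculation described above (a linear regression of ln(SI) on TE for the mono-exponential decay SI = SI0·exp(−TE·R2*)); the TE values follow the stated protocol, while the signal intensities below are hypothetical placeholders, not the study's data.

```python
import numpy as np

# Echo times from the MFGR protocol (ms) and hypothetical mean ROI signal intensities
te_ms = np.array([2, 4, 6, 8, 10, 12, 14, 16], dtype=float)
signal = np.array([520, 480, 442, 408, 376, 347, 320, 295], dtype=float)  # placeholder values

# Mono-exponential decay: SI = SI0 * exp(-TE * R2*), so ln(SI) = ln(SI0) - R2* * TE
slope, intercept = np.polyfit(te_ms, np.log(signal), 1)

r2_star = -slope * 1000.0  # slope is in 1/ms; convert to 1/s
print(f"Estimated R2* = {r2_star:.1f} s^-1")
```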
Procedia PDF Downloads 436
3333 CO₂ Absorption Studies Using Amine Solvents with Fourier Transform Infrared Analysis
Authors: Avoseh Funmilola, Osman Khalid, Wayne Nelson, Paramespri Naidoo, Deresh Ramjugernath
Abstract:
The increasing global atmospheric temperature is of great concern, and this has led to the development of technologies to reduce the emission of greenhouse gases into the atmosphere. Flue gas emissions from fossil fuel combustion are major sources of greenhouse gases. One of the ways to reduce the emission of CO₂ from flue gases is by a post-combustion capture process, and this can be done by absorbing the gas into suitable chemical solvents before emitting the gas into the atmosphere. Alkanolamines are promising solvents for this capture process. Vapour-liquid equilibrium of CO₂-alkanolamine systems is often represented by CO₂ loading and partial pressure of CO₂ without considering the liquid phase. The liquid phase of this system is a complex one comprising 9 species. Online analysis of the process is important to monitor the concentrations of the liquid-phase reacting and product species. Liquid-phase analysis of CO₂-diethanolamine (DEA) solution was performed by attenuated total reflection Fourier transform infrared (ATR-FTIR) spectroscopy. A robust calibration was performed for the CO₂-aqueous DEA system prior to an online monitoring experiment. The partial least squares regression method was used for the analysis of the calibration spectra obtained. The models obtained were used for prediction of DEA and CO₂ concentrations in the online monitoring experiment. The experiment was performed with a newly built recirculating experimental setup in the laboratory. The setup consists of a 750 ml equilibrium cell and an ATR-FTIR liquid flow cell. Measurements were performed at 400°C. The results obtained indicated that FTIR spectroscopy combined with the partial least squares method is an effective tool for online monitoring of speciation.
Keywords: ATR-FTIR, CO₂ capture, online analysis, PLS regression
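A minimal sketch of the calibration step described above, using partial least squares regression as implemented in scikit-learn; the spectra and reference concentrations are random placeholders, and the number of latent variables is an assumption (in practice it would be chosen by cross-validation on the calibration spectra).

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Hypothetical calibration set: 60 ATR-FTIR spectra (500 wavenumber points each)
# with reference concentrations [DEA, CO2] for each spectrum.
rng = np.random.default_rng(0)
X = rng.random((60, 500))                        # placeholder absorbance spectra
y = rng.random((60, 2)) * np.array([3.0, 1.0])   # placeholder concentrations

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

pls = PLSRegression(n_components=5)   # number of latent variables: assumed value
pls.fit(X_train, y_train)

y_pred = pls.predict(X_test)          # predicted DEA and CO2 concentrations for new spectra
print("R^2 on held-out spectra:", pls.score(X_test, y_test))
```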
Procedia PDF Downloads 197
3332 A Double Ended AC Series Arc Fault Location Algorithm Based on Currents Estimation and a Fault Map Trace Generation
Authors: Edwin Calderon-Mendoza, Patrick Schweitzer, Serge Weber
Abstract:
Series arc faults appear frequently and unpredictably in low-voltage distribution systems. Many methods have been developed to detect this type of fault, and commercial protection devices such as AFCIs (arc fault circuit interrupters) have been used successfully in electrical networks to prevent damage and catastrophic incidents like fires. However, these devices do not allow series arc faults to be located on the line in operating mode. This paper presents a location algorithm for series arc faults in a low-voltage indoor power line in an AC 230 V-50 Hz home network. The method is validated through simulations using the MATLAB software. The fault location method uses the electrical parameters (resistance, inductance, capacitance, and conductance) of a 49 m indoor power line. The mathematical model of a series arc fault is based on the analysis of the V-I characteristics of the arc and consists basically of two antiparallel diodes and DC voltage sources. In a first step, the arc fault model is inserted at several different positions along the line, which is modeled using lumped parameters. At both ends of the line, currents and voltages are recorded for each arc fault generation at different distances. In the second step, a fault map trace is created by using signature coefficients obtained from Kirchhoff equations, which allow a virtual decoupling of the line's mutual capacitance. Each signature coefficient, obtained from the subtraction of estimated currents, is calculated taking into account the discrete fast Fourier transform of currents and voltages and also the fault distance value. These parameters are then substituted into the Kirchhoff equations. In a third step, the same procedure described previously to calculate signature coefficients is employed, but this time by considering hypothetical fault distances where the fault can appear; in this step the fault distance is unknown. The iterative calculation from the Kirchhoff equations, considering stepped variations of the fault distance, yields a curve with a linear trend. Finally, the fault location is estimated at the intersection of the two curves obtained in steps 2 and 3, as shown in the sketch below. The series arc fault model is validated by comparing currents registered from simulation with real recorded currents. The model of the complete circuit is obtained for a 49 m line with a resistive load. Also, 11 different arc fault positions are considered for the map trace generation. By carrying out the complete simulation, the performance of the method and the perspectives of the work will be presented.
Keywords: indoor power line, fault location, fault map trace, series arc fault
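A minimal sketch of the final intersection step, assuming both curves from steps 2 and 3 have already been reduced to linear trends over candidate fault distances; the coefficient values are hypothetical and only illustrate how the crossing point yields the estimated location.

```python
import numpy as np

# Candidate fault distances along the 49 m line and two placeholder linear trends:
# curve_2 built from the measured/estimated currents (step 2),
# curve_3 built from hypothetical fault distances (step 3).
d = np.linspace(0.0, 49.0, 50)
curve_2 = 0.021 * d + 0.35    # placeholder trend from step 2
curve_3 = -0.013 * d + 1.20   # placeholder trend from step 3

# Fit both trends and solve a1*x + b1 = a2*x + b2 for the intersection.
a1, b1 = np.polyfit(d, curve_2, 1)
a2, b2 = np.polyfit(d, curve_3, 1)
fault_distance = (b2 - b1) / (a1 - a2)
print(f"Estimated series arc fault location: {fault_distance:.1f} m from the measuring end")
```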
Procedia PDF Downloads 137
3331 Analysis of a Generalized Sharma-Tasso-Olver Equation with Variable Coefficients
Authors: Fadi Awawdeh, O. Alsayyed, S. Al-Shará
Abstract:
Considering the inhomogeneities of media, the variable-coefficient Sharma-Tasso-Olver (STO) equation is hereby investigated with the aid of symbolic computation. A newly developed simplified bilinear method is described for the solution of the considered equation. Without any constraints on the coefficient functions, multiple kink solutions are obtained. A parametric analysis is carried out in order to analyze the effects of the coefficient functions on the stabilities and propagation characteristics of the solitonic waves.
Keywords: Hirota bilinear method, multiple kink solution, Sharma-Tasso-Olver equation, inhomogeneity of media
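For context, a commonly quoted constant-coefficient form of the Sharma-Tasso-Olver equation is given below; the variable-coefficient generalization studied in the paper replaces the constant α by coefficient functions (this form is standard background, not taken from the paper itself):
\[ u_t + 3\alpha u_x^{2} + 3\alpha u^{2} u_x + 3\alpha u u_{xx} + \alpha u_{xxx} = 0. \]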
Procedia PDF Downloads 517
3330 Low Temperature PVP Capping Agent Synthesis of ZnO Nanoparticles by a Simple Chemical Precipitation Method and Their Properties
Authors: V. P. Muhamed Shajudheen, K. Viswanathan, K. Anitha Rani, A. Uma Maheswari, S. Saravana Kumar
Abstract:
We report a simple and low-cost chemical precipitation method adopted to prepare zinc oxide (ZnO) nanoparticles using polyvinyl pyrrolidone (PVP) as a capping agent. Differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA) were applied to the dried gel sample to record the phase transformation temperature of zinc hydroxide Zn(OH)2 to zinc oxide (ZnO) and thereby obtain the annealing temperature of 80 °C. The thermal, structural, morphological and optical properties were studied by different techniques such as DSC-TGA, X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), micro-Raman spectroscopy, UV-visible absorption spectroscopy (UV-Vis), photoluminescence spectroscopy (PL) and field emission scanning electron microscopy (FESEM). X-ray diffraction results confirmed the wurtzite hexagonal structure of the ZnO nanoparticles. The two intense peaks at 160 and 432 cm⁻¹ in the Raman spectrum are mainly attributed to the first-order modes of the wurtzite ZnO nanoparticles. The energy band gap obtained from the UV-Vis absorption spectra shows a blue shift, which is attributed to an increase in carrier concentration (Burstein-Moss effect). Photoluminescence studies of the single-crystalline ZnO nanoparticles show a strong peak centered at 385 nm, corresponding to the near-band-edge emission in the ultraviolet range. Mixed grape-like, spherical, hexagonal and rock-like morphologies were observed in FESEM. The results showed that PVP is a suitable capping agent for the preparation of ZnO nanoparticles by a simple chemical precipitation method.
Keywords: ZnO nanoparticles, simple chemical precipitation route, mixed shape morphology, UV-visible absorption, photoluminescence, Fourier transform infra-Red spectroscopy
Procedia PDF Downloads 443
3329 Modelling of the Linear Operator in the Representation of the Function of Wave of a Micro Particle
Authors: Mohammedi Ferhate
Abstract:
This paper deals with generalizing the notion of the wave function of a freely moving micro particle and the concept of the linear operator in the representation involving the Dirac delta function, which generalizes the Kronecker symbol to the case of a continuously varying quantity, subject to the orthonormality condition on the eigenfunctions. Given the use of linear operators and their eigenfunctions in connection with the solution of given differential equations, it is of interest to study the properties of the operators themselves and to determine which of these properties follow purely from the nature of the operators, without reference to specific forms of the eigenfunctions. Model simulation examples are also presented.
Keywords: function, operator, simulation, wave
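A minimal illustration of the orthonormality condition mentioned above: for a continuously varying index f, the Kronecker symbol of the discrete case is replaced by the Dirac delta function,
\[ \int \psi_{f'}^{*}(q)\,\psi_{f}(q)\,dq = \delta(f - f'), \qquad \text{compared with} \qquad \int \psi_{m}^{*}(q)\,\psi_{n}(q)\,dq = \delta_{mn}. \]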
Procedia PDF Downloads 146
3328 High-Temperature Corrosion of Weldment of Fe-2%Mn-0.5%Si Steel in N2/H2O/H2S-Mixed Gas
Authors: Sang Hwan Bak, Min Jung Kim, Dong Bok Lee
Abstract:
Fe-2%Mn-0.5%Si-0.2C steel was welded and corroded at 600, 700 and 800 °C for 20 h in 1 atm of N2/H2S/H2O-mixed gas in order to characterize the high-temperature corrosion behavior of the welded joint. Corrosion proceeded rapidly and almost linearly, and it increased with an increase in the corrosion temperature. FeS formed owing to the sulfur released from H2S. The scales were fragile and nonadherent.
Keywords: Fe-Mn-Si steel, corrosion, welding, sulfidation, H2S gas
Procedia PDF Downloads 408
3327 Survival Data with Incomplete Missing Categorical Covariates
Authors: Madaki Umar Yusuf, Mohd Rizam B. Abubakar
Abstract:
Survival censored data with incomplete covariate information are a common occurrence in many studies in which the outcome is survival time. When the missing covariates are categorical, a useful technique for obtaining parameter estimates is the EM algorithm by the method of weights, as sketched below. The survival outcome for the class of generalized linear models is applied, and this method requires the estimation of the parameters of the distribution of the covariates. In this paper, we consider clinical trial data with five covariates, four of which have some missing values, clearly showing that the data were censored.
Keywords: EM algorithm, incomplete categorical covariates, ignorable missing data, missing at random (MAR), Weibull distribution
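A sketch of the weighting idea behind the EM by the method of weights, assuming a single categorical covariate x with categories j, current parameter estimate θ⁽ᵏ⁾, covariate-category probabilities p_j, and a censored survival likelihood contribution f(tᵢ, δᵢ | xᵢ = j, θ): each subject with a missing covariate contributes one weighted copy per category, with weight equal to the posterior probability of that category (the notation here is illustrative, not the paper's):
\[ w_{ij} = P\!\left(x_i = j \mid t_i, \delta_i, \theta^{(k)}\right) = \frac{f\!\left(t_i, \delta_i \mid x_i = j, \theta^{(k)}\right) p_j^{(k)}}{\sum_{l} f\!\left(t_i, \delta_i \mid x_i = l, \theta^{(k)}\right) p_l^{(k)}}. \]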
Procedia PDF Downloads 405
3326 Degradation of Petroleum Hydrocarbons Using Pseudomonas Aeruginosa Isolated from Oil Contaminated Soil Incorporated into E. coli DH5α Host
Authors: C. S. Jeba Samuel
Abstract:
Soil, especially from oil fields, poses a great hazard for terrestrial and marine ecosystems. Traditional treatment of oil-contaminated soil cannot degrade the crude oil completely; so far, biodegradation has proved to be an efficient method. During biodegradation, crude oil is used as the carbon source, and the addition of nitrogenous compounds increases microbial growth, resulting in the effective breakdown of crude oil components into low-molecular-weight components. The present study was carried out to evaluate the biodegradation of crude oil by the hydrocarbon-degrading microorganism Pseudomonas aeruginosa isolated from a natural environment, namely oil-contaminated soil. Pseudomonas aeruginosa, an oil-degrading microorganism also called a hydrocarbon-utilizing microorganism (or "HUM" bug), can utilize crude oil as its sole carbon source. In this study, the biodegradation of crude oil was conducted with modified mineral basal salt medium and nitrogen sources so as to increase the degradation. The plasmid from the isolated strain was incorporated into an E. coli DH5α host to speed up the degradation of oil. The use of molecular techniques increased oil degradation, which was confirmed by the degradation of aromatic and aliphatic rings of hydrocarbons and was inferred from the smaller number of peaks in Fourier transform infrared spectroscopy (FTIR). The gas chromatogram again confirms better degradation by the transformed cells through the smaller number of components obtained in the oil treated with transformed cells. This study demonstrated the technical feasibility of using direct inoculation of transformed cells onto the oil-contaminated region, thereby achieving better oil degradation in a shorter time than the degradation caused by the wild strain.
Keywords: biodegradation, aromatic rings, plasmid, hydrocarbon, Fourier Transform Infrared Spectroscopy (FTIR)
Procedia PDF Downloads 372
3325 The Experimental House: A Case Study to Assess the Long-Term Performance of Waste Tires Used as Replacement for Natural Material in Backfill Applications for Basement Walls in Manitoba
Authors: M. Shokry Rashwan
Abstract:
This study follows a number of experiments conducted at Red River College (RRC) to investigate the short-term properties of tire-derived aggregate (TDA) produced from shredding off-the-road (OTR) waste tires in a proposed new application. The application targets replacing the natural material used under concrete slabs and as backfill for residential homes' basement slabs and walls, respectively, with TDA. The experimental work included determining compressibility, gradation distribution, unit weight, hydraulic conductivity and lateral pressure. Based on the results of those short-term properties, it was decided to move forward to study the long-term performance of this otherwise waste material through an on-site demonstration. A full-scale basement replicating a typical Manitoba home was therefore built at RRC, where both TDA and natural materials (NM) were used side by side. A large number of sensing and measuring systems are used to compare the performance of each material when exposed to typical ground and weather conditions. Parameters monitored and measured include heat losses, moisture migration, drainage ability, lateral pressure, relative movements of slabs and walls, integrity of ground water and radon emissions. Up-to-date results have confirmed part of the conclusions reached from the earlier laboratory experiments. However, other results have shown that construction practices, such as placing and compaction, may need some adjustments to achieve more desirable outcomes. This presentation provides a review of both the short-term tests and an up-to-date analysis of the on-site demonstration.
Keywords: tire derived aggregate (TDA), basement construction, TDA material properties, lateral pressure of TDA, hydraulic conductivity of TDA
Procedia PDF Downloads 213
3324 Nullity of t-Tupple Graphs
Authors: Khidir R. Sharaf, Didar A. Ali
Abstract:
The nullity η(G) of a graph is the multiplicity of zero as an eigenvalue in its spectrum. A zero-sum weighting of a graph G is a real-valued function, say f, from the vertices of G to the set of real numbers such that, for each vertex v of G, the sum of the weights f(w) over all neighbors w of v is zero. A high zero-sum weighting of G is one that uses the maximum number of non-zero independent variables. If G is a graph with an end vertex, and if H is the induced subgraph of G obtained by deleting this vertex together with the vertex adjacent to it, then η(G) = η(H). In this paper, a high zero-sum weighting technique and the end-vertex procedure are applied to evaluate the nullity of t-tupple and generalized t-tupple graphs, which is derived and determined for some special types of graphs. Also, we introduce and prove some important results about the t-tupple coalescence, Cartesian and Kronecker products of nut graphs.
Keywords: graph theory, graph spectra, nullity of graphs, statistic
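Restating the zero-sum weighting condition above in symbols: a weighting f : V(G) → ℝ is zero-sum when
\[ \sum_{w \in N(v)} f(w) = 0 \quad \text{for every vertex } v \in V(G), \]
and a high zero-sum weighting is one attaining the maximum number of independent non-zero variables.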
Procedia PDF Downloads 239
3323 Problems Encountered in Teaching English as a Second Language in Asia
Authors: Geraldine Agbor Ojong
Abstract:
This paper conveys some of the problems that teachers of ESL face in classroom settings in Thailand. The results of this paper were obtained through closed- and open-ended questionnaires administered to a group of English language teachers at three prominent schools in Kaengkhoi, Saraburi Province, Thailand (Saengvithaya School, Kaengkhoi School and Pytoon Withaya School), face-to-face interviews of some foreign teachers and students selected randomly, and general observation. The data were analysed by frequency distribution and percentage. The results of the study may be generalized so that the conference committee can suggest possible solutions or give contributing ideas on how to handle some of these problems.
Keywords: Asian, colonize, ESL, foreign country
Procedia PDF Downloads 442
3322 Mean and Volatility Spillover between US Stocks Market and Crude Oil Markets
Authors: Kamel Malik Bensafta, Gervasio Bensafta
Abstract:
The purpose of this paper is to investigate the relationship between oil prices and stock markets. The empirical analysis in this paper is conducted within the context of multivariate GARCH models, using a transformed version of the so-called BEKK parameterization. We show that the mean and uncertainty of the US market are transmitted to the oil market and the European market. We also identify an important transmission from WTI prices to Brent prices.
Keywords: oil volatility, stock markets, MGARCH, transmission, structural break
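For background, the baseline BEKK(1,1) conditional-covariance recursion on which such transformed parameterizations are built can be written as follows (a standard textbook form quoted for context, not the paper's exact specification):
\[ H_t = C'C + A'\,\varepsilon_{t-1}\varepsilon_{t-1}'\,A + B'\,H_{t-1}\,B. \]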
Procedia PDF Downloads 485
3321 Investigation of the Morphology of SiO2 Nano-Particles Using Different Synthesis Techniques
Authors: E. Gandomkar, S. Sabbaghi
Abstract:
In this paper, the effects of different synthesis methods on the morphology and size of silica nanostructures, prepared via modified sol-gel and precipitation methods, have been investigated. The resulting products have been characterized by particle size analysis, scanning electron microscopy (SEM), X-ray diffraction (XRD) and Fourier transform infrared (FT-IR) spectroscopy. As a result, the SiO2 obtained with the sol-gel and precipitation methods was spherical in shape, whereas the modified sol-gel method yielded a nanolayer structure.
Keywords: modified sol-gel, precipitation, nanolayer, Na2SiO3, nanoparticle
Procedia PDF Downloads 292
3320 Artificial Neural Networks Application on Nusselt Number and Pressure Drop Prediction in Triangular Corrugated Plate Heat Exchanger
Authors: Hany Elsaid Fawaz Abdallah
Abstract:
This study presents a new artificial neural network (ANN) model to predict the Nusselt number and pressure drop for turbulent flow in a triangular corrugated plate heat exchanger for forced air and turbulent water flow. An experimental investigation was performed to create a new dataset for the Nusselt number and pressure drop values in the following range of dimensionless parameters: plate corrugation angles from 0° to 60°, Reynolds number from 10000 to 40000, pitch-to-height ratio from 1 to 4, and Prandtl number from 0.7 to 200. Based on the ANN performance graph, the three-layer structure with {12-8-6} hidden neurons was chosen. The training procedure includes back-propagation with bias and weight adjustment, evaluation of the loss function for the training and validation datasets, and feed-forward propagation of the input parameters. The linear function was used at the output layer as the activation function, while for the hidden layers the rectified linear unit activation function was utilized. In order to accelerate the ANN training, the loss function minimization may be achieved by the adaptive moment estimation algorithm (ADAM). The 'MinMax' normalization approach was utilized to avoid an increase in training time due to drastic differences in the loss function gradients with respect to the values of the weights. Since the test dataset is not used for the ANN training, a cross-validation technique is applied to the ANN network using the new data. This procedure was repeated until loss function convergence was achieved, or for 4000 epochs with a batch size of 200 points. The program code was written in Python 3.0 using open-source ANN libraries such as scikit-learn, TensorFlow and Keras. Mean average percent errors of 9.4% for the Nusselt number and 8.2% for the pressure drop were achieved for the ANN model, i.e., higher accuracy compared to the generalized correlations. The performance validation of the obtained model was based on a comparison of predicted data with the experimental results, yielding excellent accuracy.
Keywords: artificial neural networks, corrugated channel, heat transfer enhancement, Nusselt number, pressure drop, generalized correlations
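A minimal sketch of the {12-8-6} network described above, using the Keras API named in the abstract; the training arrays below are random placeholders, and the reduced epoch count here (versus the 4000 epochs reported) is only to keep the sketch light.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from tensorflow import keras

# Inputs: corrugation angle, Reynolds number, pitch-to-height ratio, Prandtl number.
# Outputs: Nusselt number and pressure drop. The arrays below are random placeholders.
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.uniform(0, 60, 500),       # corrugation angle, deg
    rng.uniform(1e4, 4e4, 500),    # Reynolds number
    rng.uniform(1, 4, 500),        # pitch-to-height ratio
    rng.uniform(0.7, 200, 500),    # Prandtl number
])
y = rng.random((500, 2))           # placeholder [Nu, pressure drop] targets

X_scaled = MinMaxScaler().fit_transform(X)   # "MinMax" normalization as in the abstract
y_scaled = MinMaxScaler().fit_transform(y)

model = keras.Sequential([
    keras.layers.Input(shape=(4,)),
    keras.layers.Dense(12, activation="relu"),   # hidden layers: 12-8-6 neurons, ReLU
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(6, activation="relu"),
    keras.layers.Dense(2, activation="linear"),  # linear output: Nu and pressure drop
])
model.compile(optimizer="adam", loss="mse")      # ADAM minimizes the loss function

model.fit(X_scaled, y_scaled, epochs=100, batch_size=200, validation_split=0.2, verbose=0)
```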
Procedia PDF Downloads 87
3319 Theta-Phase Gamma-Amplitude Coupling as a Neurophysiological Marker in Neuroleptic-Naive Schizophrenia
Authors: Jun Won Kim
Abstract:
Objective: Theta-phase gamma-amplitude coupling (TGC) was used as a novel evidence-based tool to reflect the dysfunctional cortico-thalamic interaction in patients with schizophrenia. However, to the best of our knowledge, no studies have reported the diagnostic utility of TGC in the resting-state electroencephalogram (EEG) of neuroleptic-naive patients with schizophrenia compared to healthy controls. Thus, the purpose of this EEG study was to understand the underlying mechanisms in patients with schizophrenia by comparing resting-state TGC between the two groups and to evaluate the diagnostic utility of TGC. Method: The subjects included 90 patients with schizophrenia and 90 healthy controls. All patients were diagnosed with schizophrenia according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) by two independent psychiatrists using semi-structured clinical interviews. Because patients were either drug-naive (first episode) or had not been taking psychoactive drugs for one month before the study, we could exclude the influence of medications. Six frequency bands were defined for spectral analyses: delta (1-4 Hz), theta (4-8 Hz), slow alpha (8-10 Hz), fast alpha (10-13.5 Hz), beta (13.5-30 Hz), and gamma (30-80 Hz). The spectral power of the EEG data was calculated with the fast Fourier transform using the 'spectrogram.m' function of the Signal Processing Toolbox in Matlab. An analysis of covariance (ANCOVA) was performed to compare the TGC results between the groups, adjusted using a Bonferroni correction (P < 0.05/19 = 0.0026). Receiver operating characteristic (ROC) analysis was conducted to examine the discriminating ability of the TGC data for schizophrenia diagnosis. Results: The patients with schizophrenia showed a significant increase in resting-state TGC at all electrodes. The delta, theta, slow alpha, fast alpha, and beta powers showed low accuracies of 62.2%, 58.4%, 56.9%, 60.9%, and 59.0%, respectively, in discriminating the patients with schizophrenia from the healthy controls. The ROC analysis performed on the TGC data generated the most accurate result among the EEG measures, displaying an overall classification accuracy of 92.5%. Conclusion: As TGC includes phase, which contains information about neuronal interactions from the EEG recording, TGC is expected to be useful for understanding the mechanisms of the dysfunctional cortico-thalamic interaction in patients with schizophrenia. The resting-state TGC value was increased in the patients with schizophrenia compared to that in the healthy controls and had a higher discriminating ability than the other parameters. These findings may be related to compensatory hyper-arousal patterns of the dysfunctional default-mode network (DMN) in schizophrenia. Further research exploring the association between TGC and medical or psychiatric conditions that may confound EEG signals will help clarify the potential utility of TGC.
Keywords: quantitative electroencephalography (QEEG), theta-phase gamma-amplitude coupling (TGC), schizophrenia, diagnostic utility
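A minimal sketch of one common way to compute theta-phase gamma-amplitude coupling from a single EEG channel (band-pass filtering plus the Hilbert transform, summarized by a mean-vector-length modulation index); the abstract does not give the exact TGC formulation used in the study, so this is illustrative only, and the sampling rate and data are placeholders.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, low, high, fs, order=4):
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

fs = 250                                  # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
eeg = rng.standard_normal(t.size)         # placeholder EEG trace

theta_phase = np.angle(hilbert(bandpass(eeg, 4, 8, fs)))    # theta (4-8 Hz) phase
gamma_amp = np.abs(hilbert(bandpass(eeg, 30, 80, fs)))      # gamma (30-80 Hz) amplitude envelope

# Mean-vector-length modulation index: magnitude of the mean gamma amplitude
# placed at the corresponding theta phase angle.
tgc = np.abs(np.mean(gamma_amp * np.exp(1j * theta_phase)))
print(f"Theta-gamma coupling (modulation index): {tgc:.4f}")
```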
Procedia PDF Downloads 143
3318 Transformation of Hexagonal Cells into Auxetic in Core Honeycomb Furniture Panels
Authors: Jerzy Smardzewski
Abstract:
Structures with negative Poisson's ratios are called auxetic. They are characterized by better mechanical properties than conventional structures, especially shear strength, the ability to absorb energy and increased strength during bending, particularly in sandwich panels. Commonly used paper cores of cellular boards are made of hexagonal cells. With isotropic facings, these cells provide isotropic properties of the entire furniture board. Shelves made of such panels with a thickness similar to standard chipboard do not provide adequate stiffness and strength of the furniture. However, it is possible to transform the shape of hexagonal cells into polyhedral auxetic cells that improve the mechanical properties of the core. The work aimed to transform the hexagonal cells of the paper core into auxetic cells and determine their basic mechanical properties. Using numerical methods, it was decided to design the most favorable proportions of cells, distinguished by the lowest Poisson's ratio and the highest modulus of linear elasticity. Standard cores for cellular boards, commonly used to produce 34 mm thick furniture boards, were used for the tests. Poisson's ratios, bending strength, and linear elasticity moduli were determined for such cores and boards. Then, the cells were transformed into auxetic structures, and analogous cellular boards were made, for which the mechanical properties were determined. The results of numerical simulations, for which the variable parameters were the dimensions of the cell walls, wall inclination angles, and relative cell density, are presented in the further part of the paper. Experimental tests and numerical simulations showed the beneficial effect of auxeticization on the mechanical quality of furniture panels. They allowed for the selection of the optimal shape of auxetic core cells.
Keywords: auxetics, honeycomb, panels, simulation, experiment
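As background on why re-entrant (negative-angle) cell walls produce auxetic behavior, the textbook Gibson-Ashby estimate for the in-plane Poisson's ratio of a honeycomb with inclined walls of length l, vertical walls of length h and wall angle θ is quoted below (a standard relation, not taken from the paper):
\[ \nu_{12} = \frac{\cos^{2}\theta}{\left(h/l + \sin\theta\right)\sin\theta}, \]
so that θ < 0 (re-entrant cells) makes ν₁₂ negative.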
Procedia PDF Downloads 11
3317 Role of von Willebrand Factor and ADAMTS13 In The Prediction of Thrombotic Complications In Patients With COVID-19
Authors: Nataliya V. Dolgushina, Elena A. Gorodnova, Olga S. Beznoshenco, Andrey Yu Romanov, Irina V. Menzhinskaya, Lyubov V. Krechetova, Gennady T. Suchich
Abstract:
In patients with COVID-19, generalized hypercoagulability can lead to the development of severe coagulopathy. This event is accompanied by the development of a pronounced inflammatory reaction. The observational prospective study included 39 patients with mild COVID-19 and 102 patients with moderate and severe COVID-19. Patients were then stratified into groups depending on the risk of venous thromboembolism. The vWF to ADAMTS-13 concentration and activity ratios were significantly higher in patients with a high venous thromboembolism risk among the patients with moderate and severe COVID-19.
Keywords: ADAMTS-13, COVID-19, hypercoagulation, thrombosis, von Willebrand factor
Procedia PDF Downloads 89
3316 Constant Dimension Codes via Generalized Coset Construction
Authors: Kanchan Singh, Sheo Kumar Singh
Abstract:
The fundamental problem of subspace coding is to explore the maximum possible cardinality Aq(n, d, k) of a set of k-dimensional subspaces of an n-dimensional vector space over Fq such that the subspace distance satisfies ds(W1, W2) ≥ d for any two distinct subspaces W1, W2 in this set. In this paper, we construct a new class of constant dimension codes (CDCs) by generalizing the coset construction and combining it with CDCs derived from the parallel linkage construction and the coset construction, with the aim of improving the known lower bounds on Aq(n, d, k). We found a remarkable improvement in some of the lower bounds on Aq(n, d, k).
Keywords: constant dimension codes, rank metric codes, coset construction, parallel linkage construction
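For reference, the subspace distance used above is the standard one,
\[ d_s(W_1, W_2) = \dim(W_1) + \dim(W_2) - 2\,\dim(W_1 \cap W_2). \]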
Procedia PDF Downloads 20
3315 Radiation Annealing of Radiation Embrittlement of the Reactor Pressure Vessel
Authors: E. A. Krasikov
Abstract:
The influence of neutron irradiation on RPV steel degradation is examined with reference to possible reasons for the substantial scatter in the experimental data and, furthermore, for the nonstandard (non-monotonic) and oscillatory embrittlement behavior. In our view, this phenomenon may be explained by the presence of a wavelike component in the embrittlement kinetics. We suppose that the main factor affecting anomalous steel embrittlement is the fast neutron intensity (dose rate or flux), and that the manifestation of the flux effect depends on the current fluence level: at low fluences, radiation degradation exceeds the normative value, then approaches the normative value and finally becomes sub-normative. Data on changes in radiation damage, including data from ex-service RPVs, were obtained and analyzed taking into account the chemical factor, the fast neutron fluence and the neutron flux. In our opinion, the controversy in estimating the impact of neutron flux on radiation degradation may likewise be explained by the presence of this wavelike component, with the flux effect manifestation depending on the fluence level. Moreover, as a hypothesis, we suppose that at some stages of irradiation the damaged metal is partially restored by irradiation, i.e., by neutron bombardment. The structure arising during irradiation undergoes, once or periodically, transformations in the direction of both degradation and recovery of the initial properties. According to our hypothesis, at some stage(s) of metal structure degradation, neutron bombardment becomes a recovering factor. As a result, an oscillation arises, which in turn leads to enhanced data scatter.
Keywords: annealing, embrittlement, radiation, RPV steel
Procedia PDF Downloads 341
3314 Crude Oil and Stocks Markets: Prices and Uncertainty Transmission Analysis
Authors: Kamel Malik Bensafta, Gervasio Semedo
Abstract:
The purpose of this paper is to investigate the relationship between oil prices and stock markets. The empirical analysis in this paper is conducted within the context of multivariate GARCH models, using a transformed version of the so-called BEKK parameterization. We show that the mean and uncertainty of the US market are transmitted to the oil market and the European market. We also identify an important transmission from WTI prices to Brent prices.
Keywords: oil volatility, stock markets, MGARCH, transmission, structural break
Procedia PDF Downloads 524
3313 Fast Switching Mechanism for Multicasting Failure in OpenFlow Networks
Authors: Alaa Allakany, Koji Okamura
Abstract:
Multicast is an efficient and scalable technology for data distribution that optimizes network resources. However, in IP networks, the responsibility for the management of multicast groups is distributed among network routers, which causes some limitations such as delays in processing group events, high bandwidth consumption and redundant tree calculation. Software Defined Networking (SDN), represented by OpenFlow, has been presented as a solution for many problems; in SDN the control plane and data plane are separated by shifting control and management to a remote centralized controller, and the routers are used as forwarders only. In this paper we propose a fast switching mechanism for handling link failure in the multicast tree, based on the Tabu Search heuristic algorithm and on modifying the functions of the OpenFlow switch so that it switches quickly to the backup subtree rather than sending the event to the controller. In this work we implement a multicasting OpenFlow controller; this centralized controller is a core part of our multicasting approach and is responsible for 1- constructing the multicast tree, 2- handling multicast group events and maintaining multicast state, and 3- modifying OpenFlow switch functions for fast switching to backup paths. Forwarders forward the multicast packets based on multicast routing entries generated by the centralized controller. Tabu search is used as the heuristic algorithm for constructing a near-optimum multicast tree and for keeping the tree near-optimum when members join or leave the multicast group (group events).
Keywords: multicast tree, software define networks, tabu search, OpenFlow
Procedia PDF Downloads 263
3312 Improvement of Microscopic Detection of Acid-Fast Bacilli for Tuberculosis by Artificial Intelligence-Assisted Microscopic Platform and Medical Image Recognition System
Authors: Hsiao-Chuan Huang, King-Lung Kuo, Mei-Hsin Lo, Hsiao-Yun Chou, Yusen Lin
Abstract:
The most robust and economical method for laboratory diagnosis of TB is to identify mycobacterial bacilli (AFB) under acid-fast staining, despite its disadvantages of low sensitivity and labor intensiveness. Though digital pathology is becoming popular in medicine, an automated microscopic system for microbiology is still not available. A new AI-assisted automated microscopic system, consisting of a microscopic scanner and a recognition program powered by big data and deep learning, may significantly increase the sensitivity of TB smear microscopy. Thus, the objective is to evaluate such an automatic system for the identification of AFB. A total of 5,930 smears were enrolled for this study. An intelligent microscope system (TB-Scan, Wellgen Medical, Taiwan) was used for microscopic image scanning and AFB detection. 272 AFB smears were used for transfer learning to increase the accuracy. Referee medical technicians were used as the gold standard for result discrepancies. Results showed that, for a total of 1,726 AFB smears, the automated system's accuracy, sensitivity and specificity were 95.6% (1,650/1,726), 87.7% (57/65), and 95.9% (1,593/1,661), respectively. Compared to culture, the sensitivity for human technicians was only 33.8% (38/142); however, the automated system achieved 74.6% (106/142), which is significantly higher than human technicians, and this is the first such automated microscope system for TB smear testing in a controlled trial. This automated system could achieve higher TB smear sensitivity and laboratory efficiency and may complement molecular methods (e.g., GeneXpert) to reduce the total cost of TB control. Furthermore, such an automated system is capable of remote access via the internet and can be deployed in areas with limited medical resources.
Keywords: TB smears, automated microscope, artificial intelligence, medical imaging
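A small sketch reproducing the reported performance figures from the stated counts (the counts are taken directly from the abstract; only the arithmetic is shown here):

```python
# Counts reported for the automated system on 1,726 AFB smears.
correct, total = 1650, 1726
tp, positives = 57, 65          # sensitivity: 57/65
tn, negatives = 1593, 1661      # specificity: 1593/1661

accuracy = correct / total
sensitivity = tp / positives
specificity = tn / negatives
print(f"accuracy={accuracy:.1%}, sensitivity={sensitivity:.1%}, specificity={specificity:.1%}")
```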
Procedia PDF Downloads 229
3311 Transforming Data into Knowledge: Mathematical and Statistical Innovations in Data Analytics
Authors: Zahid Ullah, Atlas Khan
Abstract:
The rapid growth of data in various domains has created a pressing need for effective methods to transform this data into meaningful knowledge. In this era of big data, mathematical and statistical innovations play a crucial role in unlocking insights and facilitating informed decision-making in data analytics. This abstract aims to explore the transformative potential of these innovations and their impact on converting raw data into actionable knowledge. Drawing upon a comprehensive review of existing literature, this research investigates the cutting-edge mathematical and statistical techniques that enable the conversion of data into knowledge. By evaluating their underlying principles, strengths, and limitations, we aim to identify the most promising innovations in data analytics. To demonstrate the practical applications of these innovations, real-world datasets will be utilized through case studies or simulations. This empirical approach will showcase how mathematical and statistical innovations can extract patterns, trends, and insights from complex data, enabling evidence-based decision-making across diverse domains. Furthermore, a comparative analysis will be conducted to assess the performance, scalability, interpretability, and adaptability of different innovations. By benchmarking against established techniques, we aim to validate the effectiveness and superiority of the proposed mathematical and statistical innovations in data analytics. Ethical considerations surrounding data analytics, such as privacy, security, bias, and fairness, will be addressed throughout the research. Guidelines and best practices will be developed to ensure the responsible and ethical use of mathematical and statistical innovations in data analytics. The expected contributions of this research include advancements in mathematical and statistical sciences, improved data analysis techniques, enhanced decision-making processes, and practical implications for industries and policymakers. The outcomes will guide the adoption and implementation of mathematical and statistical innovations, empowering stakeholders to transform data into actionable knowledge and drive meaningful outcomes.
Keywords: data analytics, mathematical innovations, knowledge extraction, decision-making
Procedia PDF Downloads 75
3310 The 10,000 Fold Effect Retrograde Neurotransmission: A Newer Concept for Paraplegia’s Physiological Revival by the Use of Intrathecal Sodium Nitroprusside
Authors: V. K. Tewari, M. Hussain, H. K. D. Gupta
Abstract:
B-Methylprednisolone, with a level-1 benefit (20%), is usually given in paraplegia, but only within 8 hours. Patients wait a long duration for physiological recovery. Intrathecal sodium nitroprusside (ITSNP) has been used in vasospasm due to subarachnoid hemorrhage. ITSNP has been studied here for its wide window period for treatment, fast recovery and affordability. Two mechanisms for acute cases and one mechanism for chronic cases, which are interrelated, are proposed for physiological recovery: retrograde neurotransmission, vasospasm and long-term potentiation (LTP). This is a case-control prospective study of 82 paraplegia patients (10 patients taken as controls, with no superfusion or dextrose 5% superfusion, and 72 patients in the ITSNP group). The mean time to superfusion was 14.11 days. ITSNP was administered at a dosage of 0.2 mg/kg body weight. Patients were monitored pre- and post-ITSNP by SSEP/MEP. After 2 hours in the ITSNP group, the mean change from baseline in ASIA motor/sensory score was 13.84%/13.10%; after 24 hours, the motor score decreased by 1.27 points (3.77%) and the sensory score increased by 10.5 points (6.22%), whereas no change was noted in the control group up to 24 hours. At 7 days, the ITSNP group changed by 11.56%/6.22% (motor/sensory) compared with 7.60%/4.48% in the control group; at 2 months, by 27.69%/6.22% in the ITSNP group compared with 16.02%/4.5% in the control group. SSEP/MEP-documented improvements were noted. ITSNP, a swift-acting drug in the treatment of paraplegia, is effective within two hours (mean change: motor 13.84% and sensory 13.10%) on the mean 14.11th post-paraplegia day, with a small detrimental response after 24 hours which recovers fast.
Keywords: paraplegias, intrathecal sodium nitroprusside, retrograde transmission, the 10,000 fold effect, perforators, vasodilatations, long term potentiations
Procedia PDF Downloads 409
3309 Theoretical Investigation on Electronic and Magnetic Properties of Cubic PrMnO3 Perovskite
Authors: B. Bouadjemi, S. Bentata, W. Benstaali, A. Abbad, T. Lantri, A. Zitouni
Abstract:
The purpose of this study was to investigate the structural, electronic and magnetic properties of the cubic praseodymium oxide perovskite PrMnO3. It includes our calculations based on the use of density functional theory (DFT) with both the generalized gradient approximation (GGA) and GGA+U approaches. The spin-polarized electronic band structures and densities of states, as well as the integer value of the magnetic moment of the unit cell (6 μB), illustrate that PrMnO3 is half-metallic ferromagnetic. The study proves that the compound is half-metallic ferromagnetic, and the results obtained make cubic PrMnO3 a promising candidate for application in spintronics.
Keywords: cubic, DFT, electronic properties, magnetic moment, spintronics
Procedia PDF Downloads 465
3308 Calculated Structural and Electronic Properties of Mg and Bi
Authors: G. Patricia Abdel Rahim, Jairo Arbey Rodriguez M, María Guadalupe Moreno Armenta
Abstract:
The present study shows the structural, electronic and magnetic properties of magnesium (Mg) and bismuth (Bi) in a (1x1x5) supercell. Both materials were studied in five crystalline structures: rock salt (NaCl), cesium chloride (CsCl), zinc-blende (ZB), wurtzite (WZ), and nickel arsenide (NiAs), using density functional theory (DFT), the generalized gradient approximation (GGA), and the full-potential linearized augmented plane wave (FP-LAPW) method. By fitting Murnaghan's equation of state, we determine the lattice constant, the bulk modulus and its derivative with respect to pressure. We also calculated the density of states (DOS) and the band structure.
Keywords: bismuth, magnesium, pseudo-potential, supercell
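For context, the Murnaghan equation of state used for such fits has the standard energy-volume form quoted below, with E₀, V₀, B₀ and B₀′ the equilibrium energy, equilibrium volume, bulk modulus and its pressure derivative:
\[ E(V) = E_0 + \frac{B_0 V}{B_0'}\left[\frac{(V_0/V)^{B_0'}}{B_0' - 1} + 1\right] - \frac{B_0 V_0}{B_0' - 1}. \]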
Procedia PDF Downloads 822
3307 How Autonomous Vehicles Transform Urban Policies and Cities
Authors: Adrián P. Gómez Mañas
Abstract:
Autonomous vehicles have already transformed urban policies and cities. This is the main assumption of our research, which aims to understand how representations of the possible arrival of autonomous vehicles already transform priorities or actions in transport and, more broadly, urban policies. This research is done within the framework of a Ph.D. directed by Professor Xavier Desjardins at the Sorbonne University of Paris. Our hypotheses are: (i) the perspectives, representations, and imaginaries concerning autonomous vehicles already affect the stakeholders of urban policies; (ii) the discourses on the opportunities or threats of autonomous vehicles reflect the current strategies of the stakeholders. Each stakeholder tries to adopt a discourse on autonomous vehicles that allows them to change their current tactics and strategies as little as possible. The objective is eventually to make a comparison between three different cases: Paris, the United Arab Emirates, and Bogota. We chose those territories because their contexts are very different, but they all have important interests in mobility and innovation, and they have all started to reflect on the subject of self-driving mobility. The main methodology used is to interview actors of the metropolitan area (local officials, leading urban and transport planners, influential experts, and private companies). This work is supplemented with conferences, official documents, press articles, and websites. The objective is to understand: 1) What they know about autonomous vehicles and where their knowledge comes from; 2) What they expect from autonomous vehicles; 3) How their ideas about autonomous vehicles are transforming their action and strategy in managing daily mobility, investing in transport, designing public spaces and urban planning. We are going to present the research and some preliminary results; we will show that autonomous vehicles are often viewed by public authorities as a lever to reach something else. We will also show that discourses are strongly influenced by the local context (political, geographical, economic, etc.), creating an interesting balance between global and local influences. We will analyze the differences and similarities between the three cases and will try to understand their causes.
Keywords: autonomous vehicles, self-driving mobility, urban planning, urban mobility, transport, public policies
Procedia PDF Downloads 198
3306 Foundation Settlement Determination: A Simplified Approach
Authors: Adewoyin O. Olusegun, Emmanuel O. Joshua, Marvel L. Akinyemi
Abstract:
The heterogeneous nature of the subsurface requires the use of factual information rather than assumptions or generalized equations. Therefore, there is a need to determine the actual rate of settlement possible in the soil before structures are built on it. This information will help in determining the type of foundation design and the kind of reinforcement that will be necessary in construction. This paper presents a simplified and faster approach for determining foundation settlement in any type of soil using real field data acquired from seismic refraction techniques and cone penetration tests. This approach was also able to determine the depth of settlement of each soil stratum. The results obtained revealed the different settlement times and depths of settlement possible.
Keywords: heterogeneous, settlement, foundation, seismic, technique
Procedia PDF Downloads 445
3305 Identifying Pathogenic Mycobacterium Species Using Multiple Gene Phylogenetic Analysis
Authors: Lemar Blake, Chris Oura, Ayanna C. N. Phillips Savage
Abstract:
Improved DNA sequencing technology has greatly enhanced bacterial identification, especially for organisms that are difficult to culture. Mycobacteriosis, with consistent hyphema, bilateral exophthalmia, open mouth gape and ocular lesions, was observed in various fish populations at the School of Veterinary Medicine, Aquaculture/Aquatic Animal Health Unit. Objective: To identify the species of Mycobacterium that is affecting aquarium fish at the School of Veterinary Medicine, Aquaculture/Aquatic Animal Health Unit. Method: A total of 13 fish samples were collected and analyzed via Ziehl-Neelsen staining, conventional polymerase chain reaction (PCR) and real-time PCR. These tests were carried out simultaneously for confirmation. The following combination of conventional primers was used: 16s rRNA (564 bp), rpoB (396 bp), sod (408 bp). Concatenation of the gene fragments was carried out to phylogenetically classify the organism. Results: Acid-fast, non-branching bacilli were detected in all samples from homogenized internal organs. All 13 acid-fast samples were positive for Mycobacterium via real-time PCR. Partial gene sequences using all three primer sets were obtained from two samples and demonstrated a novel strain. A strain 99% related to Mycobacterium marinum was also confirmed in one sample, using the 16s rRNA and rpoB genes. The two novel strains clustered with the rapid growers and with strains that are known to affect humans. Conclusions: Phylogenetic analysis demonstrated two novel Mycobacterium strains with the potential of being zoonotic and one strain 99% related to Mycobacterium marinum.
Keywords: polymerase chain reaction, phylogenetic, DNA sequencing, zoonotic
Procedia PDF Downloads 143