Search results for: graph reduction
4579 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data
Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L. Duan
Abstract:
The conditional density characterizes the distribution of a response variable y given a predictor x and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, the authors extend NF neural networks to the setting where an external predictor x is present. Specifically, they use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zₚ, zₙ]. The zₚ component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zₙ component is a high-dimensional independent Gaussian vector, which explains the variations in y that are unrelated or only weakly related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework while significantly improving the interpretation of the latent component, since zₚ represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations, due to factors such as lighting condition and subject ID, from the other random variations. Further, the experiments show that an unconditional NF neural network, based on an unsupervised model of z such as a Gaussian mixture, fails to generate interpretable results.
Keywords: conditional density estimation, image generation, normalizing flow, supervised dimension reduction
Procedia PDF Downloads 96
4578 Transition Dynamic Analysis of the Urban Disparity in Iran “Case Study: Iran Provinces Center”
Authors: Marzieh Ahmadi, Ruhullah Alikhan Gorgani
Abstract:
The usual methods of measuring regional inequalities cannot reflect the internal changes of the country in terms of displacement between different development groups, and the indicators of inequality are not effective in demonstrating the dynamics of the distribution of inequality. For this purpose, this paper examines the dynamics of the urban disparity transition in the country during the period 2006-2016 using the CIRD multidimensional index and the stochastic kernel density method. It first selects 25 indicators in five dimensions, including macroeconomic conditions, science and innovation, environmental sustainability, human capital, and public facilities, and develops a two-stage Principal Component Analysis methodology to create a composite index of inequality. Then, in the second stage, using a nonparametric analytical approach to internal distribution dynamics and a stochastic kernel density method, the convergence hypothesis of the CIRD index of the Iranian province centers is tested, and then, based on the ergodic density, the long-run equilibrium is shown. Also, at this stage, for the purpose of adopting accurate regional policies, the distribution dynamics and the process of convergence or divergence of the Iranian provinces are examined for each of the five dimensions. According to the results of the first stage, in 2006 and 2016 the highest level of development is related to Tehran, and Zahedan is at the lowest level of development. The results show that the central cities of the country are at the highest level of development, due to the effects of Tehran's knowledge spillover, and the country's border cities are at the lowest level of development. The main reason for this may be the lack of access to markets in the border provinces.
Based on the results of the second stage, which examines the dynamics of regional inequality transmission in the country during 2006-2016, the distribution in the first year (2006) is not multimodal: according to the kernel density graph, the CIRD index of about 70% of the cities lies between -1.1 and -0.1, and the rest of the distribution on the right lies above -0.1. In the kernel distribution, a convergence process is observed and the graph shows a single peak; there is a small secondary peak at about 3, but the main peak is at about -0.6. In the final year (2016), a multimodal pattern appears: there is no mobility in the lower-level groups, but at the higher level, the CIRD index of about 45% of the provinces lies at about -0.4. This year clearly shows a twin-peak density pattern, which indicates that the cities tend to cluster into groups at similar levels of development. Also, according to the distribution dynamics results, the provinces of Iran follow a single-peak density pattern in 2006 and a double-peak density pattern in 2016 at low and moderate inequality index levels, and the country's development index diverges during the years 2006 to 2016.
Keywords: urban disparity, CIRD index, convergence, distribution dynamics, stochastic kernel density
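The stochastic kernel density machinery behind this analysis can be sketched with a plain Gaussian kernel density estimate. A minimal NumPy sketch, using hypothetical index values and bandwidth rather than the paper's data:

```python
import numpy as np

def gaussian_kde(samples, grid, bandwidth):
    """Evaluate a Gaussian kernel density estimate on a grid of points."""
    samples = np.asarray(samples, dtype=float)
    diffs = (grid[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * diffs**2) / np.sqrt(2.0 * np.pi)
    return kernels.sum(axis=1) / (len(samples) * bandwidth)

# Hypothetical CIRD-like index values for a handful of province centers
index_2006 = np.array([-1.0, -0.8, -0.7, -0.6, -0.5, -0.4, -0.2, 0.1, 3.0])
grid = np.linspace(-2.0, 4.0, 601)
density = gaussian_kde(index_2006, grid, bandwidth=0.3)

# The estimate integrates to ~1, and the main peak sits near the bulk of the data
area = float(density.sum() * (grid[1] - grid[0]))
main_peak = float(grid[np.argmax(density)])
```

Evaluating such an estimate for each year and counting its peaks is how single-peak versus twin-peak patterns are read off the density graphs.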
Procedia PDF Downloads 123
4577 Effect of Reservoir Fluctuations on an Active Landslide in the Xiangjiaba Reservoir Area, Southwest China
Authors: Javed Iqbal
Abstract:
Filling of the Xiangjiaba Reservoir Lake in Southwest China triggered and re-activated numerous landslides due to water fluctuation. In order to understand the relationship between reservoirs and slope instability, a typical reservoir landslide (the Dasha landslide) on the right bank of the Jinsha River was selected as a case study for in-depth investigation. Detailed field investigations were carried out in order to characterize the landslide with respect to its surroundings and to find the slip surface. Boreholes were drilled in order to determine the subsurface lithology and the depth of failure of the Dasha landslide. In-situ geotechnical tests were performed, and soil samples from the exposed slip surface were retrieved for geotechnical laboratory analysis. Finally, stability analysis was done using the 3D strength reduction method under different conditions of reservoir water level fluctuation and rainfall. The in-depth investigations show that the Dasha landslide is a bedding rockslide which was once activated in 1986. The topography of the Dasha landslide is relatively flat, while the back scarp and local terrain are relatively steep. The landslide area is about 29 × 10⁴ m², the maximum thickness of the landslide deposits revealed by drilling is about 40 m with an average thickness of about 20 m, and the volume is thus estimated at about 580 × 10⁴ m³. Bedrock in the landslide area is composed of the Suining Formation of Jurassic age. The main rock type is silty mudstone with sandstone, and the bedding orientation is 300~310° ∠ 7~22°. The factor of safety (FOS) of the Dasha landslide obtained by 3D strength reduction cannot meet the minimum safety requirement under the working condition of reservoir level fluctuation as designed, combined with the effects of rainfall and rapid drawdown.
Keywords: Dasha landslide, Xiangjiaba reservoir, strength reduction method, bedding rockslide
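The strength reduction method used for the FOS computation divides the shear strength parameters c and tan φ by a trial factor until the slope reaches limit equilibrium. A minimal sketch on the closed-form infinite-slope model (a deliberate simplification of the paper's 3D analysis; all parameter values are hypothetical):

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, z, beta_deg):
    """Direct factor of safety for an infinite slope: resisting / driving shear."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    resisting = c + gamma * z * math.cos(beta)**2 * math.tan(phi)
    return resisting / driving

def strength_reduction_fos(c, phi_deg, gamma, z, beta_deg, tol=1e-6):
    """Bisect on the reduction factor F until the slope with strengths
    c/F and tan(phi)/F is at limit equilibrium (FS = 1)."""
    lo, hi = 0.01, 100.0
    while hi - lo > tol:
        f = 0.5 * (lo + hi)
        phi_red = math.degrees(math.atan(math.tan(math.radians(phi_deg)) / f))
        fs = infinite_slope_fs(c / f, phi_red, gamma, z, beta_deg)
        if fs > 1.0:      # still stable: strength can be reduced further
            lo = f
        else:
            hi = f
    return 0.5 * (lo + hi)

# Hypothetical slope: c = 10 kPa, phi = 20 deg, gamma = 19 kN/m3, z = 5 m, beta = 15 deg
direct_fs = infinite_slope_fs(10.0, 20.0, 19.0, 5.0, 15.0)
reduced_fs = strength_reduction_fos(10.0, 20.0, 19.0, 5.0, 15.0)
```

For this closed-form model the converged reduction factor coincides with the directly computed factor of safety, which makes it a convenient sanity check before moving to a numerical 3D model.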
Procedia PDF Downloads 161
4576 Active Vibration Reduction for a Flexible Structure Bonded with Sensor/Actuator Pairs on Efficient Locations Using a Developed Methodology
Authors: Ali H. Daraji, Jack M. Hale, Ye Jianqiao
Abstract:
With the extensive use of high-specific-strength structures to optimise loading capacity and material cost in aerospace and most engineering applications, much effort has been expended to develop intelligent structures for active vibration reduction and structural health monitoring. These structures are highly flexible, have inherently low internal damping, and are associated with large vibrations and long decay times. The modification of such structures by adding lightweight piezoelectric sensors and actuators at efficient locations, integrated with an optimal control scheme, is considered an effective solution for structural vibration monitoring and control. The size and location of the sensors and actuators are important research topics, since they affect the level of vibration detection and reduction and the amount of energy provided by a controller. Several methodologies have been presented to determine the optimal location of a limited number of sensors and actuators for small-scale structures. However, these studies have tackled this problem directly, measuring a fitness function based on eigenvalues and eigenvectors for numerous combinations of sensor/actuator pair locations and converging on an optimal set using heuristic optimisation techniques such as genetic algorithms. This is computationally expensive for both small- and large-scale structures when a number of sensor/actuator (s/a) pairs must be optimised to suppress multiple vibration modes. This paper proposes an efficient method to determine optimal locations for a limited number of sensor/actuator pairs for active vibration reduction of a flexible structure, based on the finite element method and Hamilton’s principle.
The current work takes the simplified approach of modelling a structure with sensors at all locations, subjecting it to an external force to excite the various modes of interest, and noting the locations of the sensors giving the largest average percentage sensor effectiveness, measured by dividing each sensor's output voltage by the maximum for each mode. The methodology was implemented for a cantilever plate under external force excitation to find the optimal distribution of six sensor/actuator pairs to suppress the first six modes of vibration. It is shown that the resulting optimal sensor locations agree well with published optimal locations, but with very much reduced computational effort and higher effectiveness. Furthermore, it is shown that collocated sensor/actuator pairs placed at these locations give very effective active vibration reduction using an optimal linear quadratic control scheme.
Keywords: optimisation, plate, sensor effectiveness, vibration control
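The effectiveness measure described above (each sensor's output voltage divided by the maximum for that mode, averaged over the modes) can be sketched as follows; the voltage matrix is hypothetical:

```python
import numpy as np

def sensor_effectiveness(voltages):
    """voltages[m, s]: output of candidate sensor location s for mode m.
    Returns each location's average percentage effectiveness, normalising
    by the best-responding sensor for each mode."""
    v = np.abs(np.asarray(voltages, dtype=float))
    per_mode = 100.0 * v / v.max(axis=1, keepdims=True)
    return per_mode.mean(axis=0)

# Hypothetical outputs: 3 modes x 4 candidate locations
volts = [[0.9, 0.3, 0.1, 0.9],
         [0.2, 0.8, 0.4, 0.8],
         [0.1, 0.2, 1.0, 0.5]]
eff = sensor_effectiveness(volts)
best = int(np.argmax(eff))  # location to keep first
```

Ranking locations by this score replaces the combinatorial search over sensor/actuator placements with a single forced-response computation.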
Procedia PDF Downloads 230
4575 Mechanical Properties and Microstructural Analysis of Al6061-Red Mud Composites
Authors: M. Gangadharappa, M. Ravi Kumar, H. N. Reddappa
Abstract:
The mechanical properties and morphological analysis of Al6061-red mud particulate composites were investigated. The composites comprise an Al6061 matrix with red mud particles of 53-75 micron size as reinforcement, at contents ranging from 0% to 12% in intervals of 2%. The stir casting technique was used to fabricate the Al6061-red mud composites. Density, percentage porosity, tensile properties, fracture toughness, hardness, impact energy, percentage elongation, and percentage reduction in area were evaluated. Further, microstructure and SEM examinations were carried out to characterize the composites produced. The results show a uniform dispersion of the red mud particles along the grain boundaries of the Al6061 alloy. The tensile strength and hardness values increase with the addition of red mud particles, but the impact energy, percentage elongation, and percentage reduction in area decrease slightly as the reinforcement content increases. From these results, we conclude that red mud, an industrial waste, can be used to enhance the properties of Al6061 alloy for engineering applications.
Keywords: Al6061, red mud, tensile strength, hardness, microstructures
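The percentage-porosity estimate mentioned above is conventionally derived by comparing the measured density with the rule-of-mixtures theoretical density. A sketch under that assumption, with illustrative densities rather than the paper's measurements:

```python
def theoretical_density(rho_matrix, rho_reinf, wt_frac_reinf):
    """Rule of mixtures (weight fractions) for a two-phase composite, g/cc."""
    return 1.0 / (wt_frac_reinf / rho_reinf + (1.0 - wt_frac_reinf) / rho_matrix)

def porosity_percent(rho_theoretical, rho_measured):
    """Porosity as the relative deficit of measured vs. theoretical density."""
    return 100.0 * (rho_theoretical - rho_measured) / rho_theoretical

# Illustrative values: Al6061 matrix ~2.70 g/cc, red mud ~3.10 g/cc, 6 wt% reinforcement
rho_th = theoretical_density(2.70, 3.10, 0.06)
p = porosity_percent(rho_th, 2.68)   # hypothetical measured density 2.68 g/cc
```

A porosity of a few percent is typical of stir-cast composites; a large deficit would point to gas entrapment or particle clustering.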
Procedia PDF Downloads 558
4574 The Acute Effects of Higher Versus Lower Load Duration and Intensity on Morphological and Mechanical Properties of the Healthy Achilles Tendon: A Randomized Crossover Trial
Authors: Eman Merza, Stephen Pearson, Glen Lichtwark, Peter Malliaras
Abstract:
The Achilles tendon (AT) exhibits volume changes related to fluid flow under acute load, which may be linked to changes in stiffness. Fluid flow provides a mechanical signal for cellular activity and may be one mechanism that facilitates tendon adaptation. This study aimed to investigate whether an isometric intervention involving a high level of load duration and intensity could maximize the immediate reduction in AT volume and stiffness compared to interventions involving lower levels of load duration and intensity. Sixteen healthy participants (12 males, 4 females; age = 24.4 ± 9.4 years; body mass = 70.9 ± 16.1 kg; height = 1.7 ± 0.1 m) performed three isometric interventions of varying load duration (2 s and 8 s) and intensity (35% and 75% of maximal voluntary isometric contraction) over a 3-week period. Freehand 3D ultrasound was used to measure free AT volume (at rest) and length (at 35%, 55%, and 75% of maximum plantarflexion force) pre- and post-intervention. The slope of the force-elongation curve over these force levels represented individual stiffness (N/mm). Large reductions in free AT volume and stiffness resulted from long-duration, high-intensity loading, whilst less reduction was produced at the lower load intensity. In contrast, no change in free AT volume and a small increase in AT stiffness occurred with the lower load duration. These findings suggest that the applied load on the AT must be heavy and sustained for a long duration to maximize the immediate volume reduction, which might be an acute response that enables optimal long-term tendon adaptation via mechanotransduction pathways.
Keywords: Achilles tendon, volume, stiffness, free tendon, 3D ultrasound
Procedia PDF Downloads 98
4573 Detectability of Malfunction in Turboprop Engine
Authors: Tomas Vampola, Michael Valášek
Abstract:
On the basis of simulation-generated failure states of structural elements of a turboprop engine suitable for the business-jet class of aircraft, an algorithm for early prediction of damage or reduction in functionality of structural elements of the engine is designed and verified with real data obtained at dynamometric testing facilities for aircraft engines. Based on an expanding database of experimentally determined data from temperature and pressure sensors during the operation of turboprop engines, this strategy is constantly modified with the aim of using the minimum number of sensors to detect an inadmissible or deteriorated operating mode of specific structural elements of an aircraft engine. The assembled algorithm for the early prediction of reduced functionality of the aircraft engine significantly contributes to the safety of air traffic and, to a large extent, to the economy of operation, with positive effects on the reduction of the energy demand of operation and the elimination of adverse effects on the environment.
Keywords: detectability of malfunction, dynamometric testing, prediction of damage, turboprop engine
Procedia PDF Downloads 93
4572 Effect of SCN5A Gene Mutation in Endocardial Cell
Authors: Helan Satish, M. Ramasubba Reddy
Abstract:
The simulation of an endocardial cell with a gene mutation in the cardiac sodium ion channel NaV1.5, encoded by the SCN5A gene, is discussed. The characterization of Brugada Syndrome (BrS) by the loss-of-function effect of the SCN5A mutation due to the L812Q mutant, present in the DII-S4 transmembrane region of the NaV1.5 channel protein, and its effect on an endocardial cell are studied. The ten Tusscher model of the human ventricular action potential is modified to incorporate the changes contributed by the L812Q mutant in endocardial cells. Results show that the BrS-associated SCN5A mutation causes a reduction in the inward sodium current through modifications of the channel gating dynamics, such as delayed activation, enhanced inactivation, and slowed recovery from inactivation, in the endocardial cell. The decreased inward sodium current affects the depolarization phase (Phase 0), leading to a reduction in the spike amplitude of the cardiac action potential.
Keywords: SCN5A gene mutation, sodium channel, Brugada syndrome, cardiac arrhythmia, action potential
Procedia PDF Downloads 124
4571 Hierarchical Piecewise Linear Representation of Time Series Data
Authors: Vineetha Bettaiah, Heggere S. Ranganath
Abstract:
This paper presents a Hierarchical Piecewise Linear Approximation (HPLA) for the representation of time series data, in which the time series is treated as a curve in the time-amplitude image space. The curve is partitioned into segments by choosing perceptually important points as break points. Each segment between adjacent break points is recursively partitioned into two segments at the best point or midpoint until the error between the approximating line and the original curve becomes less than a pre-specified threshold. The HPLA representation achieves dimensionality reduction while preserving prominent local features and the general shape of the time series. The representation permits coarse-to-fine processing at different levels of detail, allows flexible definition of similarity based on mathematical measures or general time series shape, and supports time series data mining operations including query by content, clustering, and classification based on whole-sequence or subsequence similarity.
Keywords: data mining, dimensionality reduction, piecewise linear representation, time series representation
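The recursive partitioning step can be sketched as a top-down piecewise linear approximation that splits each segment at its worst-fitting point until the deviation from the chord falls below the threshold; the perceptual-importance break-point selection of the full HPLA is omitted here:

```python
def split_segment(t, y, lo, hi, tol, breaks):
    """Recursively split [lo, hi] at the point of maximum vertical
    deviation from the chord until the deviation is below tol."""
    if hi - lo < 2:
        return
    slope = (y[hi] - y[lo]) / (t[hi] - t[lo])
    dev = [abs(y[i] - (y[lo] + slope * (t[i] - t[lo]))) for i in range(lo + 1, hi)]
    worst = max(range(len(dev)), key=dev.__getitem__) + lo + 1
    if dev[worst - lo - 1] <= tol:
        return                      # chord already fits this segment
    breaks.add(worst)
    split_segment(t, y, lo, worst, tol, breaks)
    split_segment(t, y, worst, hi, tol, breaks)

def hpla_breakpoints(t, y, tol):
    breaks = {0, len(t) - 1}
    split_segment(t, y, 0, len(t) - 1, tol, breaks)
    return sorted(breaks)

# A triangle wave: a single interior break point recovers it exactly
t = list(range(9))
y = [0, 1, 2, 3, 4, 3, 2, 1, 0]
bp = hpla_breakpoints(t, y, tol=0.01)
```

Storing only the break points gives the dimensionality reduction, and re-running the split with looser tolerances yields the coarser levels of the hierarchy.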
Procedia PDF Downloads 273
4570 A Subband BSS Structure with Reduced Complexity and Fast Convergence
Authors: Salah Al-Din I. Badran, Samad Ahmadi, Ismail Shahin
Abstract:
A blind source separation method is proposed; in this method, we use a non-uniform filter bank and a novel normalisation. The method provides reduced computational complexity and increased convergence speed compared to the full-band algorithm. Recently, adaptive subband schemes have been recommended to solve two problems: reducing the computational complexity and increasing the convergence speed of adaptive algorithms for correlated input signals. In this work, the reduction in computational complexity is achieved by using adaptive filters of orders lower than those of the full-band adaptive filters, operating at a sampling rate lower than that of the input signal. The signals decomposed by the analysis filter bank are less correlated in each subband than the input signal at full bandwidth, which promotes better rates of convergence.
Keywords: blind source separation, computational complexity, subband, convergence speed, mixture
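A minimal two-band illustration of the subband idea (decompose the signal so that shorter adaptive filters can run at half the rate) using a Haar analysis bank; this is a generic sketch, not the paper's non-uniform filter bank or normalisation:

```python
import numpy as np

def two_band_analysis(x):
    """Split a signal into lowpass and highpass subbands, each
    downsampled by 2, using the orthonormal Haar analysis bank."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:
        x = x[:-1]                  # truncate to an even length
    low = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    high = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return low, high

x = np.arange(8, dtype=float)
low, high = two_band_analysis(x)
```

Because the Haar bank is orthonormal, the subband energies sum to the full-band energy, and each subband carries half the samples, so the adaptive filters applied per subband can be shorter and run at half the input rate.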
Procedia PDF Downloads 577
4569 An Optimization Model for Maximum Clique Problem Based on Semidefinite Programming
Authors: Derkaoui Orkia, Lehireche Ahmed
Abstract:
This article explores the potential of a powerful optimization technique, namely Semidefinite Programming (SDP), for solving NP-hard problems. This approach provides tight relaxations of combinatorial and quadratic problems. In this work, we solve the maximum clique problem using such a relaxation. The clique problem is the computational problem of finding cliques in a graph, and it is widely acknowledged for its many applications to real-world problems. We implement a primal-dual interior-point algorithm to solve the semidefinite relaxation of the maximum clique problem; this relaxation can be solved in polynomial time, and the numerical results show that a maximum clique can be recovered from it efficiently.
Keywords: semidefinite programming, maximum clique problem, primal-dual interior point method, relaxation
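The SDP relaxation itself requires an interior-point solver; as a self-contained baseline for small graphs, the exact maximum clique can be computed with the classic Bron-Kerbosch recursion, which is what a relaxation-based answer would be checked against. This baseline is our illustration, not the authors' algorithm:

```python
def max_clique(adj):
    """Exact maximum clique via Bron-Kerbosch with pivoting.
    adj: dict mapping each vertex to its set of neighbours."""
    best = []

    def bk(r, p, x):
        nonlocal best
        if not p and not x:
            if len(r) > len(best):  # r is a maximal clique; keep the largest
                best = list(r)
            return
        pivot = max(p | x, key=lambda v: len(adj[v] & p))
        for v in list(p - adj[pivot]):
            bk(r + [v], p & adj[v], x & adj[v])
            p.remove(v)
            x.add(v)

    bk([], set(adj), set())
    return best

# 5-cycle plus a chord (0-2): the maximum clique is the triangle {0, 1, 2}
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
adj = {v: set() for v in range(5)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)
clique = max_clique(adj)
```

The exponential worst case of this search is precisely what motivates polynomial-time SDP relaxations such as the one studied above.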
Procedia PDF Downloads 219
4568 Post-Quantum Resistant Edge Authentication in Large Scale Industrial Internet of Things Environments Using Aggregated Local Knowledge and Consistent Triangulation
Authors: C. P. Autry, A. W. Roscoe, Mykhailo Magal
Abstract:
We discuss the theoretical model underlying 2BPA (two-band peer authentication), a practical alternative to conventional authentication of entities and data in IoT. In essence, this involves assembling a virtual map of authentication assets in the network, typically leading to many paths of confirmation between any pair of entities. This map is continuously updated, confirmed, and evaluated. The value of authentication along multiple disjoint paths becomes very clear, and we require analogues of triangulation to extend authentication along extended paths and deliver it along all possible paths. We discover that if an attacker wants to make an honest node falsely believe she has authenticated another, then the length of the authentication paths is of little importance. This is because optimal attack strategies correspond to minimal cuts in the authentication graph and do not contain multiple edges on the same path. The authentication provided by disjoint paths normally is additive (in entropy).
Keywords: authentication, edge computing, industrial IoT, post-quantum resistance
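The correspondence between optimal attack strategies and minimal cuts can be checked numerically with Menger's theorem: the number of edge-disjoint authentication paths between two nodes equals the minimum number of edges an attacker must subvert. A unit-capacity max-flow sketch on a hypothetical authentication graph:

```python
from collections import deque

def edge_disjoint_paths(n, edges, s, t):
    """Count edge-disjoint s-t paths (= min edge cut, by Menger's theorem)
    via unit-capacity Edmonds-Karp max flow on an undirected graph."""
    cap = [[0] * n for _ in range(n)]
    for a, b in edges:
        cap[a][b] = cap[b][a] = 1
    flow = 0
    while True:
        parent = [-1] * n           # BFS for an augmenting path
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow
        v = t
        while v != s:               # augment one unit along the found path
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            v = u
        flow += 1

# Hypothetical map: two disjoint confirmation paths between nodes 0 and 3
edges = [(0, 1), (1, 3), (0, 2), (2, 3), (1, 2)]
k = edge_disjoint_paths(4, edges, 0, 3)
```

Here k is both the number of independent confirmations node 0 has of node 3 and the minimum number of authentication edges an attacker must compromise.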
Procedia PDF Downloads 195
4567 An Experimental Study on the Temperature Reduction of Exhaust Gas during Snorkeling of a Submarine
Authors: Seok-Tae Yoon, Jae-Yeong Choi, Gyu-Mok Jeon, Yong-Jin Cho, Jong-Chun Park
Abstract:
Conventional submarines obtain propulsive force by using an electric propulsion system consisting of a diesel generator, battery, motor, and propeller. Underwater, the submarine uses the electric power stored in the battery. When a certain amount of electric power has been consumed, the submarine floats near the sea surface and recharges the battery by using the diesel generator. The voyage carried out while charging the power is called snorkeling, and the high-temperature exhaust gas from the diesel generator forms a heat distribution on the sea surface. This heat distribution can be detected by weapon systems equipped with thermal detectors, and it is a main cause of reduced survivability of the submarine. In this paper, an experimental study was carried out to establish optimal operating conditions of a submarine for the reduction of the infrared signature radiated from the sea surface. For this, a hot-gas generating system and a round acrylic water tank with adjustable water level were built. The control variables of the experiment were the mass flow rate, the temperature difference between the water and the hot gas in the water tank, and the water level difference between the air outlet and the water surface. The experimental instrumentation used T-type thermocouples to measure the released air temperature at the surface of the water, and a thermography system to measure the thermal energy distribution on the water surface. From the experimental study, we analyzed the correlation between the final released temperature at the exhaust pipe exit of a submarine and the depth of the snorkel, and presented reasonable operating conditions for infrared signature reduction of the submarine.
Keywords: experimental study, flow rate, infrared signature, snorkeling, thermography
Procedia PDF Downloads 350
4566 Microarray Gene Expression Data Dimensionality Reduction Using PCA
Authors: Fuad M. Alkoot
Abstract:
Different experimental technologies, such as microarray sequencing, have been proposed to generate high-resolution genetic data in order to understand the complex dynamic interactions between complex diseases and the biological system components of genes and gene products. However, the generated samples have a very large dimension, reaching thousands, hindering all attempts to design a classifier system that can identify diseases based on such data. Additionally, the high overlap in the class distributions makes the task more difficult. The data we experiment with are generated for the identification of autism. They include 142 samples, which is small compared to the large dimension of the data. Classifier systems trained on these data yield very low classification rates that are almost equivalent to random guessing. We aim to reduce the data dimension and improve its suitability for classification. Here, we experiment with applying a multistage PCA to the genetic data to reduce its dimensionality. Results show a significant improvement in the classification rates, which increases the possibility of building an automated system for autism detection.
Keywords: PCA, gene expression, dimensionality reduction, classification, autism
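A multistage PCA can be sketched as repeated applications of a plain SVD-based PCA; the data below are random stand-ins for the expression matrix, not the autism dataset:

```python
import numpy as np

def pca(X, k):
    """Project X (samples x features) onto its top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)                         # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                            # scores in the top-k subspace

rng = np.random.default_rng(0)
X = rng.normal(size=(142, 2000))   # 142 samples, 2000-dim stand-in for gene features
stage1 = pca(X, 50)                # first reduction stage
stage2 = pca(stage1, 10)           # second stage on the already-reduced data
```

With far fewer samples than features, the rank of the centered matrix is at most the sample count, so the first stage can never need more components than samples; the second stage further compresses toward a dimension a classifier can handle.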
Procedia PDF Downloads 559
4565 An Improved Method to Compute Sparse Graphs for Traveling Salesman Problem
Authors: Y. Wang
Abstract:
The traveling salesman problem (TSP) is an NP-hard problem in combinatorial optimization. Research shows that algorithms for the TSP on sparse graphs have shorter computation times than those on the corresponding complete graphs. We present an improved iterative algorithm to compute sparse graphs for the TSP using frequency graphs computed with frequency quadrilaterals. The iterative algorithm is enhanced by adjusting two of its parameters. The computation time of the algorithm is O(C·Nmax·n²), where C is the number of iterations, Nmax is the maximum number of frequency quadrilaterals containing each edge, and n is the scale of the TSP. The experimental results show that the computed sparse graphs generally have fewer than 5n edges for most of the Euclidean instances tested. Moreover, the maximum and minimum degrees of the vertices in the sparse graphs do not differ greatly. Thus, the computation time of methods for resolving the TSP on these sparse graphs will be greatly reduced.
Keywords: frequency quadrilateral, iterative algorithm, sparse graph, traveling salesman problem
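The frequency-quadrilateral computation is the paper's own contribution; a generic way to see why sparse candidate graphs help is to keep only each city's k nearest neighbours, which for k = 5 yields at most 5n undirected candidate edges. A sketch on a hypothetical random Euclidean instance:

```python
import math
import random

def knn_candidate_edges(points, k):
    """Sparse candidate edge set for Euclidean TSP: each city keeps
    edges to its k nearest neighbours (undirected, deduplicated)."""
    n = len(points)
    edges = set()
    for i in range(n):
        by_dist = sorted(range(n), key=lambda j: math.dist(points[i], points[j]))
        for j in by_dist[1:k + 1]:          # by_dist[0] is city i itself
            edges.add((min(i, j), max(i, j)))
    return edges

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(100)]
edges = knn_candidate_edges(pts, 5)
```

Tour-construction and local-search moves restricted to such a candidate set examine O(kn) edges instead of the O(n²) edges of the complete graph, which is the same effect the frequency-quadrilateral sparse graphs aim for with better edge selection.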
Procedia PDF Downloads 231
4564 Calcitonin Gene-Related Peptide Receptor Antagonists for Chronic Migraine – Real World Outcomes
Authors: B. J. Mahen, N. E. Lloyd-Gale, S. Johnson, W. P. Rakowicz, M. J. Harris, A. D. Miller
Abstract:
Background: Migraine is a leading cause of disability worldwide. Calcitonin gene-related peptide (CGRP) receptor antagonists offer an approach to migraine prophylaxis by inhibiting the inflammatory and vasodilatory effects of CGRP. In recent years, NICE licensed the use of three CGRP-receptor antagonists: Fremanezumab, Galcanezumab, and Erenumab. Here, we present the outcomes of CGRP-antagonist treatment in a cohort of patients who suffer from episodic or chronic migraine and have failed at least three oral prophylactic therapies. Methods: We offered CGRP antagonists to 86 patients who met the NICE criteria to start therapy. We recorded the number of headache days per month (HDPM) at 0 weeks, 3 months, and 12 months. Of those, 26 patients were later switched to an alternative agent due to poor response or side effects, giving 112 treatment cases in total. Of these 112 cases, 9 did not sufficiently maintain their headache diary, and 5 were not followed up at 3 months; we have therefore included 98 sets of data in our analysis. Results: Fremanezumab achieved a reduction in HDPM of 51.7% at 3 months (p<0.0001), with 63.7% of patients meeting NICE criteria to continue therapy. Patients trialed on Galcanezumab attained a reduction in HDPM of 47.0% (p=0.0019), with 51.6% of patients meeting NICE criteria to continue therapy. Erenumab, however, only achieved a reduction in HDPM of 17.0% (p=0.29), which was not statistically significant. Furthermore, 34.4%, 9.7%, and 4.9% of patients taking Fremanezumab, Galcanezumab, and Erenumab, respectively, continued therapy beyond 12 months. Of those who attempted drug holidays following 12 months of treatment, migraine symptoms relapsed in 100% of cases. Conclusion: We observed a significant improvement in HDPM amongst episodic and chronic migraine patients following treatment with Fremanezumab or Galcanezumab.
Keywords: migraine, CGRP, fremanezumab, galcanezumab, erenumab
Procedia PDF Downloads 93
4563 Scattering Operator and Spectral Clustering for Ultrasound Images: Application on Deep Venous Thrombi
Authors: Thibaud Berthomier, Ali Mansour, Luc Bressollette, Frédéric Le Roy, Dominique Mottier, Léo Fréchier, Barthélémy Hermenault
Abstract:
Deep Venous Thrombosis (DVT) occurs when a thrombus forms within a deep vein (most often in the legs). This disease can be deadly if part or all of the thrombus reaches the lungs and causes a Pulmonary Embolism (PE). This disorder, often asymptomatic, has multifactorial causes: immobilization, surgery, pregnancy, age, cancers, and genetic variations. Our project aims to relate the thrombus epidemiology (origins, patient predispositions, PE) to its structure using ultrasound images. Ultrasonography and elastography data were collected using a Toshiba Aplio 500 at Brest Hospital. This manuscript compares two classification approaches: spectral clustering and the scattering operator. The former is based on graph and matrix theories, while the latter cascades wavelet convolutions with nonlinear modulus and averaging operators.
Keywords: deep venous thrombosis, ultrasonography, elastography, scattering operator, wavelet, spectral clustering
Procedia PDF Downloads 477
4562 The Highly Dispersed WO3-x Photocatalyst over the Confinement Effect of Mesoporous SBA-15 Molecular Sieves for Photocatalytic Nitrogen Reduction
Authors: Xiaoling Ren, Guidong Yang
Abstract:
As one of the largest-volume industrial synthetic chemicals in the world, ammonia has the advantages of high energy density, easy liquefaction, and easy transportation, and it is widely used in agriculture, the chemical industry, energy storage, and other fields. The industrial Haber-Bosch process for ammonia synthesis is generally conducted under severe conditions, so it is essential to develop a green, sustainable strategy for ammonia production to meet the growing demand. In this direction, photocatalytic nitrogen reduction has huge advantages over the traditional, well-established Haber-Bosch process, such as the utilization of natural sunlight as the energy source and the significantly lower pressure and temperature required for the reaction. However, the high activation energy of nitrogen and the low efficiency of photo-generated electron-hole separation in the photocatalyst result in low ammonia production yields. Many researchers focus on improving the catalyst itself; in addition to modifying the catalyst, improving its dispersion and making full use of its active sites are also means of improving the overall catalytic activity. Few studies have been carried out on this, which is the aim of this work. In this work, by making full use of the nitrogen activation ability of WO3-x with defective sites, a small-size WO3-x photocatalyst with high dispersibility was constructed, while the growth of WO3-x was restricted by using a high-specific-surface-area mesoporous SBA-15 molecular sieve with a regular pore structure as a template. The morphology of pure SBA-15 and WO3-x/SBA-15 was characterized by scanning electron microscopy (SEM). Compared with pure SBA-15, some small particles can be found in the WO3-x/SBA-15 material, which means that WO3-x grows as small particles under the confinement of SBA-15; this is conducive to the exposure of catalytically active sites.
To elucidate the chemical nature of the material, X-ray diffraction (XRD) analysis was conducted. The observed diffraction pattern of WO3-x is in good agreement with that of JCPDS file no. 71-2450. Compared with WO3-x, no new peaks appeared in WO3-x/SBA-15, so it can be concluded that WO3-x/SBA-15 was synthesized successfully. In order to provide more active sites, the mass content of WO3-x was optimized. The photocatalytic nitrogen reduction performances of the above samples were then measured with methanol as a hole scavenger. The results show that the overall ammonia production performance of WO3-x/SBA-15 is improved over that of pure bulk WO3-x. These results confirm that making full use of active sites is also a means of improving overall catalytic activity. This work provides a material basis for the design of high-efficiency photocatalytic nitrogen reduction catalysts.
Keywords: ammonia, photocatalytic, nitrogen reduction, WO3-x, high dispersibility
Procedia PDF Downloads 157
4561 Effect of Aging on the Second Law Efficiency, Exergy Destruction and Entropy Generation in the Skeletal Muscles during Exercise
Authors: Jale Çatak, Bayram Yılmaz, Mustafa Ozilgen
Abstract:
The second law muscle work efficiency is obtained by multiplying the metabolic and mechanical work efficiencies. Thermodynamic analyses are carried out with 19 sets of arm and leg exercise data obtained from healthy young people. These data are used to simulate the changes occurring during aging. The muscle work efficiency decreases with aging as a result of the reduction of metabolic energy generation in the mitochondria. The reduction of mitochondrial energy efficiency makes it difficult to carry out the maintenance of the muscle tissue, which in turn causes a decline of the muscle work efficiency. When the muscle attempts to produce more work, entropy generation and exergy destruction increase. Increasing exergy destruction may be regarded as the result of the deterioration of the muscles. When the exergetic efficiency is 0.42, exergy destruction becomes 1.49 times the work performance. This proportionality becomes 2.50 and 5.21 times when the exergetic efficiency decreases to 0.30 and 0.17, respectively.
Keywords: aging mitochondria, entropy generation, exergy destruction, muscle work performance, second law efficiency
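The reported proportionality between exergy destruction and work performance can be tabulated directly from the abstract; a minimal sketch (using only the three reported value pairs, no assumed formula) confirms that the destruction-to-work ratio grows as the exergetic efficiency falls:

```python
# (exergetic efficiency, exergy destruction / work performance) pairs
# as reported in the abstract; the trend check below is illustrative only.
reported = [(0.42, 1.49), (0.30, 2.50), (0.17, 5.21)]

def destruction_ratio_increases(pairs):
    """Return True if the destruction/work ratio rises as efficiency falls."""
    ordered = sorted(pairs, key=lambda p: p[0], reverse=True)  # high -> low efficiency
    ratios = [r for _, r in ordered]
    return all(a < b for a, b in zip(ratios, ratios[1:]))

print(destruction_ratio_increases(reported))  # True for the reported data
```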
Procedia PDF Downloads 425
4560 Debris' Effect on Bearing Capacity of Defective Piles in Sand
Authors: A. M. Nasr, W. R. Azzam, K. E. Ebeed
Abstract:
For bored piles, careful cleaning must be used to reduce the amount of material trapped in the drilled hole; otherwise, the presence of debris might cause the soft toe effect, which would affect the axial resistance. There is little comprehensive research on bored piles with debris. In order to investigate the behavior of a single pile, a pile composite foundation, a two-pile group, a three-pile group, and a four-pile group, forty-eight numerical tests were conducted in which the debris was simulated using foam rubber. A pile diameter of 1 m and a length of 10 m, with a spacing of 3D and a foundation depth of 1 m, were used in this study. It is found that the existence of debris reduces the bearing capacity by 64.58% for a single pile and 33.23% for a pile composite foundation. For the two-pile group, the reduction is 23.27% and 24.24% when the ratio of defective piles to the total number of piles is 1/2 and 1, respectively. For the three-pile group, it is 10.23%, 19.42%, and 28.47% for ratios of 1/3, 2/3, and 1, respectively. For the four-pile group, it is 7.1%, 13.32%, 19.02%, and 26.36% for ratios of 1/4, 2/4, 3/4, and 1, respectively. The reduction thus increases with the ratio of defective piles to the total number of piles, and decreases with an increasing number of piles due to the interaction effect.
Keywords: debris, foundation, defective, interaction, bored pile
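The reported reductions can be organized as a lookup keyed by group size and defect count; a minimal sketch (percentages taken from the abstract, the helper and the 1000 kN intact capacity being illustrative assumptions) shows how a reduced capacity would be derived:

```python
# Reported bearing-capacity reductions (%), keyed by
# (group size, number of defective piles); values from the abstract.
reduction_pct = {
    (1, 1): 64.58,          # single pile
    (2, 1): 23.27, (2, 2): 24.24,
    (3, 1): 10.23, (3, 2): 19.42, (3, 3): 28.47,
    (4, 1): 7.10,  (4, 2): 13.32, (4, 3): 19.02, (4, 4): 26.36,
}

def reduced_capacity(intact_capacity_kn, group_size, defective):
    """Apply the reported reduction to a hypothetical intact capacity."""
    pct = reduction_pct[(group_size, defective)]
    return intact_capacity_kn * (1.0 - pct / 100.0)

print(round(reduced_capacity(1000.0, 4, 2), 1))  # 866.8
```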
Procedia PDF Downloads 94
4559 'Low Electronic Noise' Detector Technology in Computed Tomography
Authors: A. Ikhlef
Abstract:
Image noise in computed tomography is mainly caused by statistical noise, system noise, and the reconstruction algorithm's filters. In the last few years, low-dose x-ray imaging has become more and more desirable and is seen as a technically differentiating capability among CT manufacturers. In order to achieve this goal, several technologies and techniques are being investigated, including both hardware (integrated electronics and photon counting) and software (artificial intelligence and machine learning) based solutions. From a hardware point of view, electronic noise could indeed be a potential driver for low and ultra-low dose imaging. We demonstrated that the reduction or elimination of this term could lead to a reduction of dose without affecting image quality. In this study, we also show that this goal can be achieved using conventional electronics (a low-cost and affordable technology), designed carefully and optimized for maximum detective quantum efficiency. We conducted the tests using large imaging objects such as 30 cm water and 43 cm polyethylene phantoms. We compared the image quality with conventional imaging protocols at radiation doses as low as 10 mAs (<< 1 mGy). Clinical validation of these results has been performed as well.
Keywords: computed tomography, electronic noise, scintillation detector, x-ray detector
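Why eliminating the electronic term matters can be seen in the standard simplified model in which independent noise sources add in quadrature (a textbook approximation, not the authors' exact noise model; the numeric values are hypothetical):

```python
import math

def total_noise(quantum_noise, electronic_noise):
    """Independent noise sources add in quadrature (simplified model)."""
    return math.hypot(quantum_noise, electronic_noise)

# At low dose, quantum noise is small and electronic noise dominates;
# removing the electronic term recovers the quantum-limited floor.
q, e = 5.0, 12.0
print(total_noise(q, e))    # 13.0
print(total_noise(q, 0.0))  # 5.0  (quantum-limited)
```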
Procedia PDF Downloads 120
4558 Citation Analysis of New Zealand Court Decisions
Authors: Tobias Milz, L. Macpherson, Varvara Vetrova
Abstract:
The law is a fundamental pillar of human societies, as it shapes, controls and governs how humans conduct business, behave and interact with each other. Recent advances in computer-assisted technologies such as NLP, data science and AI are creating opportunities to support the practice, research and study of this pervasive domain. It is therefore not surprising that there has been an increase in investment in supporting technologies for the legal industry (also known as “legal tech” or “law tech”) over the last decade. A sub-discipline of particular appeal is concerned with assisted legal research. Supporting law researchers and practitioners in retrieving information from the vast amount of ever-growing legal documentation is of natural interest to the legal research community. One tool that has been in use for this purpose since the late nineteenth century is legal citation indexing. Among other use cases, it provides an effective means to discover precedent cases. Nowadays, computer-assisted network analysis tools allow for new and more efficient ways to reveal the “hidden” information that is conveyed through citation behavior. Unfortunately, openly available legal data is still lacking in New Zealand, and access to such networks is only commercially available via providers such as LexisNexis. Consequently, there is a need to create, analyze and provide a legal citation network with sufficient data to support legal research tasks. This paper describes the development and analysis of a legal citation network for New Zealand containing over 300,000 decisions from 125 different courts across all areas of law and jurisdiction. Using Python, the authors assembled web crawlers, scrapers and an OCR pipeline to collect court decisions from openly available sources such as NZLII and convert them into uniform, machine-readable text. This facilitated the use of regular expressions to identify references to other court decisions within the decision text.
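A minimal sketch of such regex-based extraction might look like the following; the neutral-citation pattern ("[year] COURT number") and the sample text are illustrative assumptions, as the authors' actual expressions are not given in the abstract:

```python
import re

# Hypothetical neutral-citation pattern; real pipelines would need
# additional patterns for older report-series citations.
CITATION_RE = re.compile(r"\[(\d{4})\]\s+(NZ[A-Z]+)\s+(\d+)")

def extract_citations(decision_text):
    """Return (year, court, number) tuples found in a decision's text."""
    return CITATION_RE.findall(decision_text)

sample = "As held in Smith v Jones [2014] NZHC 123 and affirmed in [2016] NZCA 45 ..."
print(extract_citations(sample))
# [('2014', 'NZHC', '123'), ('2016', 'NZCA', '45')]
```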
The data was then imported into a graph database (Neo4j), with the courts and their respective cases represented as nodes and the extracted citations as links. Furthermore, additional links between the courts of connected cases were added to indicate an indirect citation between the courts. Neo4j, as a graph database, allows efficient querying and the use of network algorithms such as PageRank to reveal the most influential and most cited courts and court decisions over time. This paper shows that the in-degree distribution of the New Zealand legal citation network resembles a power-law distribution, which indicates possible scale-free behavior of the network. This is in line with findings for the respective citation networks of the U.S. Supreme Court, Austria and Germany. The authors provide the database as an openly available data source to support further legal research. The decision texts can be exported from the database for NLP-related legal research, while the network can be used for in-depth analysis. For example, users of the database can restrict the network algorithms and metrics to specific courts in order to filter the results to the area of law of interest.
Keywords: case citation network, citation analysis, network analysis, Neo4j
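To illustrate how PageRank surfaces influential decisions in a citation graph, here is a dependency-free power-iteration sketch on a toy graph; the authors ran this inside Neo4j's graph algorithms, so this standalone version (and the toy edge list) is purely illustrative:

```python
def pagerank(edges, damping=0.85, iters=50):
    """Power-iteration PageRank on a list of (citing, cited) edges."""
    nodes = {n for e in edges for n in e}
    out = {n: [] for n in nodes}
    for citing, cited in edges:
        out[citing].append(cited)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for n, targets in out.items():
            if targets:
                share = damping * rank[n] / len(targets)
                for t in targets:
                    nxt[t] += share
            else:  # dangling node: spread its rank uniformly
                for t in nodes:
                    nxt[t] += damping * rank[n] / len(nodes)
        rank = nxt
    return rank

# Toy citation graph: three later decisions all cite "A".
edges = [("B", "A"), ("C", "A"), ("D", "A"), ("C", "B")]
ranks = pagerank(edges)
print(max(ranks, key=ranks.get))  # A  (the most-cited decision ranks highest)
```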
Procedia PDF Downloads 106
4557 Innovations in the Implementation of Preventive Strategies and Measuring Their Effectiveness Towards the Prevention of Harmful Incidents to People with Mental Disabilities who Receive Home and Community Based Services
Authors: Carlos V. Gonzalez
Abstract:
Background: Providers of in-home and community based services strive for the elimination of preventable harm to the people under their care, as well as to the employees who support them. Traditional models of safety and protection from harm have assumed that the absence of incidents of harm is a good indicator of safe practices. However, this model creates an illusion of safety that is easily shaken by sudden and inadvertent harmful events. As an alternative, we have developed and implemented an evidence-based resilient model of safety known as C.O.P.E. (Caring, Observing, Predicting and Evaluating). Within this model, safety is not defined by the absence of harmful incidents, but by the presence of continuous monitoring, anticipation, learning, and rapid response to events that may lead to harm. Objective: The objective was to evaluate the effectiveness of the C.O.P.E. model for the reduction of harm to individuals with mental disabilities who receive home and community based services. Methods: Over the course of two years, we counted the number of incidents of harm and near misses. We trained employees on strategies to eliminate incidents before they fully escalated, and to track patient status on a scale from 0 to 10. Additionally, we provided direct support professionals and supervisors with customized smartphone applications to track changes in that status every 30 minutes and notify the team. Finally, the information we collected was saved on a private computer network that analyzes and graphs the outcome of each incident. Results and conclusions: The use of the C.O.P.E. model resulted in: A reduction in incidents of harm. A reduction in the use of restraints and other physical interventions. An increase in Direct Support Professionals' ability to detect and respond to health problems. Improvement in employee alertness by decreasing sleeping on duty.
Improvement in caring and positive interaction between Direct Support Professionals and the person who is supported. Development of a method to globally measure and assess the effectiveness of prevention-from-harm plans. Future applications of the C.O.P.E. model for the reduction of harm to people who receive home and community based services are discussed.
Keywords: harm, patients, resilience, safety, mental illness, disability
Procedia PDF Downloads 447
4556 An Analysis of Prefabricated Construction Waste: A Case Study Approach
Authors: H. Hakim, C. Kibert, C. Fabre, S. Monadizadeh
Abstract:
The construction industry is saddled with the chronic problem of high waste generation. Waste management that ensures materials are utilized in an efficient manner would make a major contribution to mitigating the negative environmental impacts of construction waste, including the depletion of finite resources and the growth of occupied landfill areas, to name a few. Furthermore, 'material resource efficiency' has been found to be an economically smart approach, especially when considered during the design phase. One effective strategy is utilizing off-site construction processes, which include a series of prefabricated systems such as mobile, modular, and HUD construction (Department of Housing and Urban Development manufactured buildings). These types of buildings are by nature material- and resource-efficient. Unlike conventional construction, which is exposed to adverse weather conditions, a manufactured construction production line is capable of creating repetitive units in a factory-controlled environment. A factory can have several parallel projects underway at high speed and in a timely manner, which simplifies the storage of excess materials and their re-allocation to subsequent projects. The literature reports that prefabricated construction significantly helps reduce errors, site theft, rework, and delays, and can ultimately lead to a considerable waste reduction. However, there are not sufficient data to quantify this reduction for a regular modular house in the U.S. Therefore, this manuscript aims to provide an analysis of waste originating from a manufactured-housing factory. The analysis was made possible by several visits to, and data collection at, Homes of Merits, a Florida manufactured and modular homebuilder. The results quantify and verify a noticeable construction waste reduction.
Keywords: construction waste, modular construction, prefabricated buildings, waste management
Procedia PDF Downloads 266
4555 Minimally Invasive Open Lumbar Discectomy with Nucleoplasty and Annuloplasty as a Technique for Effective Reduction of Both Axial and Radicular Pain
Authors: Wael Elkholy, Ashraf Sakr, Mahmoud Qandeel, Adam Elkholy
Abstract:
Lumbar disc herniation is a common pathology that may cause significant low back pain and radicular pain, profoundly impairing individuals' daily life activities. Patients who undergo surgical treatment for lumbar disc herniation usually present with radiculopathy along with low back pain (LBP), rather than radiculopathy alone. When discectomy is performed, improvement in radiating leg pain caused by spinal nerve irritation is observed. However, long-term LBP due to degenerative changes in the disc may occur postoperatively. In addition, limited research has been reported on the short-term (within 1 year) improvement in LBP after discectomy. In this study, we share our minimally invasive open technique for lumbar discectomy with annuloplasty and nucleoplasty as a technique for effective reduction of both axial and radicular pain.
Keywords: nucleoplasty, sinuvertebral nerve cauterization, annuloplasty, discogenic low back pain, axial pain, radicular pain, minimally invasive lumbar discectomy
Procedia PDF Downloads 66
4554 Identifying Strategies for Improving Railway Services in Bangladesh
Authors: Armana Sabiha Huq, Tahmina Rahman Chowdhury
Abstract:
In this paper, based on a stated-preference experiment, the service quality of Bangladesh Railway has been assessed, with particular importance given to investigating whether there exists a relationship between service quality and safety. For investigation purposes, environmental and organizational factors were assumed to determine the safety performance of the railway. Data collected from the survey have been analyzed by importance-performance analysis (IPA). In this paper, a modification of the well-known importance-performance analysis (IPA) has been made by adopting importance weights determined through a structural equation modeling (SEM) approach and by plotting the gap between importance and performance on a visual graph. It has been found that there exists a relationship between safety and serviceability to some extent. Limited resources are an important constraint on improving the safety and serviceability of the Bangladesh Railway. Moreover, it is observed that only limited resources are available to monitor and improve the safety performance of the railway.
Keywords: importance-performance analysis, GAP-IPA, SEM, serviceability, safety, factor analysis
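The gap computation at the heart of GAP-IPA can be sketched as follows; the attribute names and scores are hypothetical examples (the SEM-derived weights and actual survey data are not reproduced in the abstract):

```python
# Illustrative GAP-IPA: gap = importance score - performance score.
# Positive gap => importance exceeds performance (needs attention).
attributes = {
    "punctuality": {"importance": 4.6, "performance": 3.1},
    "cleanliness": {"importance": 4.2, "performance": 3.8},
    "ticketing":   {"importance": 3.5, "performance": 3.6},
}

def gaps(attrs):
    """Per-attribute importance-performance gap, rounded to 2 decimals."""
    return {name: round(v["importance"] - v["performance"], 2)
            for name, v in attrs.items()}

print(gaps(attributes))
# {'punctuality': 1.5, 'cleanliness': 0.4, 'ticketing': -0.1}
```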
Procedia PDF Downloads 138
4553 Antimicrobial Functions of Some Spice Extracts Such as Sumac, Cumin, Black Pepper and Red Pepper on the Growth of Common Food-Borne Pathogens and Their Biogenic Amine Formation
Authors: Fatih Özogul, Esmeray Kuley Boga, Ferhat Kuley, Yesim Özogul
Abstract:
The impact of diethyl ether extracts of spices (sumac, cumin, black pepper and red pepper) on the growth of Staphylococcus aureus, Salmonella Paratyphi A, Klebsiella pneumoniae, Enterococcus faecalis, Campylobacter jejuni, Aeromonas hydrophila, Pseudomonas aeruginosa and Yersinia enterocolitica and their biogenic amine production was investigated in tyrosine decarboxylase broth. Sumac extract generally showed the highest inhibitory activity against bacterial growth compared to the other extracts, although the antimicrobial effect of the extracts varied depending on the bacterial strain. Sumac extract resulted in 3.34 and 2.54 log reductions in Y. enterocolitica and Camp. jejuni growth, whilst red pepper extract induced 0.65, 0.41 and 0.34 log reductions in the growth of Y. enterocolitica, S. Paratyphi A and Staph. aureus, respectively. The spice extracts significantly inhibited ammonia production by bacteria (P < 0.05). Eleven-fold and nine-fold reductions in ammonia production by S. Paratyphi A and Staph. aureus, respectively, were observed in the presence of sumac extract. Dopamine, agmatine, tyramine, serotonin and TMA were the main amines produced by the bacteria. Tyramine production by the food-borne pathogens was more than 10 mg/L, whereas histamine accumulated below 52 mg/L. The effect of the spice extracts on biogenic amine production varied depending on the amino acid decarboxylase broth, spice type, bacterial strain and specific amine, although cumin extract generally increased biogenic amine production by bacteria.
Keywords: antimicrobials, biogenic amines, food-borne pathogens, spice extracts
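For readers unfamiliar with the "log reduction" unit, it is the standard base-10 logarithm of the ratio of initial to surviving counts; a minimal sketch (the CFU figures are hypothetical, only the 3.34 value comes from the abstract):

```python
import math

def log_reduction(initial_cfu, final_cfu):
    """Standard microbiological log reduction: log10(N0 / N)."""
    return math.log10(initial_cfu / final_cfu)

# A 3.34-log reduction (as reported for Y. enterocolitica with sumac
# extract) means surviving counts are roughly 2188-fold lower.
print(round(10 ** 3.34))               # 2188
print(log_reduction(1e7, 1e4))         # 3.0 (a 1000-fold kill)
```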
Procedia PDF Downloads 311
4552 A Comparison of Image Data Representations for Local Stereo Matching
Authors: André Smith, Amr Abdel-Dayem
Abstract:
The stereo matching problem, while having been studied for several decades, continues to be an active area of research. The goal of this research is to find correspondences between elements found in a pair of stereoscopic images. With these pairings, it is possible to infer the distance of objects within a scene relative to the observer. Advancements in this field have led to experimentation with various techniques, from graph-cut energy minimization to artificial neural networks. At the basis of these techniques is a cost function, which is used to evaluate the likelihood of a particular match between points in each image. While at its core the cost is based on comparing image pixel data, there is a general lack of consistency as to which image data representation to use. This paper presents an experimental analysis comparing the effectiveness of the more common image data representations. The goal is to determine how effective these data representations are at reducing the cost for the correct correspondence relative to other possible matches.
Keywords: colour data, local stereo matching, stereo correspondence, disparity map
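A common cost function of the kind described is the sum of absolute differences (SAD) over a small window; a minimal one-scanline sketch (the intensity rows below are toy data, not from the paper) shows how the cost selects the correct disparity:

```python
def sad_cost(left_patch, right_patch):
    """Sum of absolute differences between two equal-length patches."""
    return sum(abs(a - b) for a, b in zip(left_patch, right_patch))

def best_disparity(left_row, right_row, x, window=1, max_disp=3):
    """Pick the disparity minimizing SAD for pixel x on one scanline."""
    lo, hi = x - window, x + window + 1
    left_patch = left_row[lo:hi]
    costs = {}
    for d in range(max_disp + 1):
        if lo - d < 0:
            continue  # shifted window would fall off the image
        costs[d] = sad_cost(left_patch, right_row[lo - d:hi - d])
    return min(costs, key=costs.get)

# The right row is the left row shifted left by 2 pixels (disparity 2).
left  = [10, 20, 80, 90, 80, 20, 10, 0]
right = [80, 90, 80, 20, 10, 0, 0, 0]
print(best_disparity(left, right, x=3))  # 2
```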
Procedia PDF Downloads 368
4551 Engineering Method to Measure the Impact Sound Improvement with Floor Coverings
Authors: Katarzyna Baruch, Agata Szelag, Jaroslaw Rubacha, Bartlomiej Chojnacki, Tadeusz Kamisinski
Abstract:
The methodology used to measure the reduction of transmitted impact sound by floor coverings placed on a massive floor is described in ISO 10140-3:2010. To carry out such tests, a standardised reverberation room, separated from a second measuring room by a standard floor, is required. The need for a special laboratory results in the high cost and low accessibility of this measurement. The authors propose their own engineering method to measure the impact sound improvement provided by floor coverings. This method does not require a standard room or floor. This paper describes the measurement procedure of the proposed engineering method. Further, verification tests were performed. Validation of the proposed method was based on an analytical model, a Statistical Energy Analysis (SEA) model and empirical measurements. The results obtained were compared with corresponding ones from ISO 10140-3:2010 measurements. The study confirmed the usefulness of the engineering method.
Keywords: building acoustics, impact noise, impact sound insulation, impact sound transmission, reduction of impact sound
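The quantity being measured is, per frequency band, the drop in normalized impact sound pressure level when the covering is added; a minimal sketch (the band levels below are hypothetical example values, not the authors' data):

```python
# Impact sound improvement of a covering, per frequency band:
# delta_L = Ln(bare reference floor) - Ln(floor with covering).
bands_hz   = [125, 250, 500, 1000]
ln_bare    = [72.0, 70.5, 68.0, 66.5]   # dB, bare floor (example values)
ln_covered = [70.0, 65.5, 58.0, 51.5]   # dB, with covering (example values)

def impact_sound_improvement(bare, covered):
    """Per-band improvement delta_L in dB."""
    return [round(b - c, 1) for b, c in zip(bare, covered)]

print(impact_sound_improvement(ln_bare, ln_covered))
# [2.0, 5.0, 10.0, 15.0]
```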
Procedia PDF Downloads 322
4550 Effects of Concomitant Use of Metformin and Powdered Moringa Oleifera Leaves on Glucose Tolerance in Sprague-Dawley Rats
Authors: Emielex M. Aguilar, Kristen Angela G. Cruz, Czarina Joie L. Rivera, Francis Dave C. Tan, Gavino Ivan N. Tanodra, Dianne Katrina G. Usana, Mary Grace T. Valentin, Nico Albert S. Vasquez, Edwin Monico C. Wee
Abstract:
The risk of diabetes mellitus is increasing in the Philippines, with Metformin and insulin the drugs commonly used for its management. The use of herbal medicines has grown increasingly common, especially among the elderly population. Moringa oleifera, or malunggay, is one of the most common plants in the country, and several studies have shown the plant to exhibit a hypoglycemic property attributed to its flavonoid content. This study aims to investigate the possible effects of concomitant use of Metformin and powdered M. oleifera leaves (PMOL) on blood glucose levels. Twenty male Sprague-Dawley rats were equally distributed into four groups. Fasting blood glucose levels of the rats were measured prior to experimentation. The following treatments were administered to the four groups, respectively: glucose only 2 g/kg; glucose 2 g/kg + Metformin 100 mg/kg; glucose 2 g/kg + PMOL 200 mg/kg; and glucose 2 g/kg + PMOL 200 mg/kg + Metformin 100 mg/kg. Blood glucose levels were determined at the 1st, 2nd, 3rd, and 4th hour post-treatment and compared between groups. Statistical analysis showed that the type of intervention had no significant effect on the reduction of blood glucose levels across groups (p=0.378), while the effect of time was significant (p=0.000). The interaction between the type of intervention and the time of blood glucose measurement was also significant (p=0.024). Within each group, the control and PMOL-treated groups showed significant reductions in blood glucose levels over time, with p-values of 0.000 and 0.000, respectively, while the Metformin-treated and combination groups had non-significant p-values of 0.062 and 0.093, respectively.
The descriptive data also showed that the mean total reduction in blood glucose levels of the Metformin and PMOL combination group was lower than that of the PMOL-treated group alone, while it was higher than that of the Metformin-treated group alone. Based on the results obtained, the combination of Metformin and PMOL did not significantly lower the blood glucose levels of the rats compared to the other groups. However, concomitant use of Metformin and PMOL may mean that each agent affects the other's blood-glucose-lowering activity. Additionally, a prolonged exposure time and a delay in the first blood glucose measurement after treatment could have a significant effect on blood glucose levels. Further studies are recommended regarding the effects of concomitant use of the two agents on blood glucose levels.
Keywords: blood glucose levels, concomitant use, metformin, Moringa oleifera
Procedia PDF Downloads 412