Search results for: feature expanding.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2111

851 Functional Characterization of Transcriptional Regulator WhiB Proteins of Mycobacterium Tuberculosis

Authors: Sonam Kumari

Abstract:

Mycobacterium tuberculosis (Mtb), the causative agent of tuberculosis, possesses a remarkable ability to enter into and emerge from a persistent state. The mechanism by which Mtb switches from the dormant state to the replicative form is still poorly characterized. Proteome studies have given us insight into the role of certain proteins in conferring stupendous virulence on Mtb, but numerous dots remain unconnected and unaccounted for. The WhiB family is one such group of proteins, associated with developmental processes in actinomycetes. Mtb has seven such proteins (WhiB1 to WhiB7). WhiB proteins are transcriptional regulators; their conserved C-terminal HTH motif is involved in DNA binding. They regulate various essential genes of Mtb by binding to their promoter DNA. The biophysical effect of DNA binding on WhiB proteins has not yet been appropriately characterized. Interaction with DNA induces conformational changes in the WhiB proteins, as confirmed by steady-state fluorescence and circular dichroism spectroscopy. Isothermal titration calorimetry (ITC) was used to deduce the thermodynamic parameters and binding affinity of the interaction. Since these transcription factors are highly unstable in vitro, their stability and solubility were enhanced by co-expression of molecular chaperones. The present findings help determine the conditions under which the WhiB proteins interact with their binding partners and the factors that influence their binding affinity. This is crucial for understanding their role in regulating gene expression in Mtb and for targeting WhiB proteins as drug targets to cure TB.

Keywords: tuberculosis, WhiB proteins, mycobacterium tuberculosis, nucleic acid binding

Procedia PDF Downloads 104
850 The Effect of Stent Coating on the Stent Flexibility: Comparison of Covered Stent and Bare Metal Stent

Authors: Keping Zuo, Foad Kabinejadian, Gideon Praveen Kumar Vijayakumar, Fangsen Cui, Pei Ho, Hwa Liang Leo

Abstract:

Carotid artery stenting (CAS) is the standard procedure for patients with severe carotid stenosis at high risk for carotid endarterectomy (CEA). A major drawback of CAS is the higher incidence of procedure-related stroke compared with CEA, the traditional open surgical treatment for carotid stenosis, even with the use of embolic protection devices (EPD). As currently available bare metal stents cannot address this problem, our research group developed a novel preferential covered stent for the carotid artery that aims to prevent friable fragments of atherosclerotic plaques from flowing into the cerebral circulation while maintaining flow to the external carotid artery. Preliminary animal studies have demonstrated the potential of this novel covered-stent design for the treatment of carotid atherosclerotic stenosis. The purpose of this study is to evaluate the effect of membrane coating on stent flexibility in order to improve the clinical performance of our novel covered stents. A total of 21 stents were evaluated in this study: 15 self-expanding bare nitinol stents and 6 PTFE-covered stents. Ten of the bare stents were coated with 11%, 16%, and 22% polyurethane (PU); 4%, 6.25%, and 11% EE; or 22% PU plus 5 μm Parylene. Different laser cutting designs were applied to 4 of the PTFE-covered stents. All the stents, with or without the covering membrane, were subjected to a three-point flexural test. Each stent was placed on two supports 30 mm apart, and the actuator applied a force at the exact middle of the two supports through a loading pin with a radius of 2.5 mm. The loading pin displacement, the applied force, and the variation in stent shape were recorded for analysis. The flexibility of the stents was evaluated by lumen area preservation at three bending displacement levels: 5 mm, 7 mm, and 10 mm. The lumen areas of all stents decreased as the displacement increased from 0 to 10 mm. The bare stents maintained 0.864 ± 0.015, 0.740 ± 0.025, and 0.597 ± 0.031 of the original lumen area at 5 mm, 7 mm, and 10 mm displacement, respectively. Among the covered stents, those with the EE coating membrane showed the best lumen area preservation (0.839 ± 0.005, 0.7334 ± 0.043, and 0.559 ± 0.014), whereas the stents with PU and Parylene coating preserved only 0.662, 0.439, and 0.305. Bending stiffness was also calculated and compared. These results provide information for optimal material selection, which is crucial for enhancing the clinical performance of our novel covered stents.

Keywords: carotid artery, covered stent, nonlinear, hyperelastic, stress, strain

Procedia PDF Downloads 295
849 An Adaptive Dimensionality Reduction Approach for Hyperspectral Imagery Semantic Interpretation

Authors: Akrem Sellami, Imed Riadh Farah, Basel Solaiman

Abstract:

With the development of HyperSpectral Imagery (HSI) technology, the spectral resolution of HSI has become denser, resulting in a large number of spectral bands, high correlation between neighboring bands, and high data redundancy. Semantic interpretation is therefore a challenging task for HSI analysis due to the high dimensionality and the high correlation of the different spectral bands. This work presents a dimensionality reduction approach that overcomes these issues and improves the semantic interpretation of HSI. First, in order to preserve the spatial information, the Tensor Locality Preserving Projection (TLPP) is applied to transform the original HSI. In the second step, knowledge is extracted based on the adjacency graph to describe the different pixels. Based on the TLPP transformation matrix, a weighted matrix is constructed to rank the different spectral bands by their contribution scores, and the relevant bands are adaptively selected based on this weighted matrix. The performance of the presented approach has been validated through several experiments, and the obtained results demonstrate its efficiency compared to various existing dimensionality reduction techniques. According to the experimental results, we conclude that this approach can adaptively select the relevant spectral bands, improving the semantic interpretation of HSI.
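The band-ranking step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy projection matrix, the sum-of-absolute-weights scoring rule, and the function names are all assumptions made for the example.

```python
# Illustrative sketch: rank spectral bands by their contribution to a
# learned projection, keeping the top-k highest-scoring bands.

def band_scores(projection):
    """Score each band as the total absolute projection weight in its row."""
    return [sum(abs(w) for w in row) for row in projection]

def select_bands(projection, k):
    """Return indices of the k highest-scoring bands, best first."""
    scores = band_scores(projection)
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]

# Toy 4-band projection matrix onto 2 components (rows = bands)
P = [[0.9, 0.1],
     [0.2, 0.2],
     [0.1, 0.8],
     [0.05, 0.05]]
print(select_bands(P, 2))  # the two bands with the largest total weight
```

In a real pipeline the projection matrix would come from TLPP and the scores could be normalized into a weighted matrix before thresholding adaptively.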

Keywords: band selection, dimensionality reduction, feature extraction, hyperspectral imagery, semantic interpretation

Procedia PDF Downloads 354
848 Experimental Investigation of Partially Premixed Laminar Methane/Air Co-Flow Flames Using Mach-Zehnder Interferometry

Authors: Misagh Irandoost Shahrestani, Mehdi Ashjaee, Shahrokh Zandieh Vakili

Abstract:

In this paper, a partially premixed laminar methane/air co-flow flame is studied experimentally. The methane-air flame was established on an axisymmetric coannular burner. The fuel-air jet flows from the central tube, while the secondary air flows from the region between the inner and the outer tube. The aim is to investigate the flame features and to develop a nonintrusive method for temperature measurement of a methane/air partially premixed flame using the Mach-Zehnder interferometry method. Different equivalence ratios and Reynolds numbers are considered. The generic visible appearance of the flame was also investigated and its various structures were studied. Three distinct flame regimes were observed based on appearance. A double flame structure can be seen for equivalence ratios in the range 1<Φ<2.1. By adding air to the mixture up to Φ=4, the flame has the characteristics of both premixed and non-premixed flames. Finally, for 4<Φ<∞, the flame becomes mainly non-premixed-like, and the luminous sooting region at its tip is the characteristic feature of this type of flame. The Mach-Zehnder method obtains the temperature field of a transparent fluid by means of the index of refraction. Temperatures obtained from the optical technique were compared with those obtained from thermocouples in order to validate the results, and good agreement was observed between the two methods.

Keywords: flame structure, Mach-Zehnder interferometry, partially premixed flame, temperature field

Procedia PDF Downloads 481
847 Different Sampling Schemes for Semi-Parametric Frailty Model

Authors: Nursel Koyuncu, Nihal Ata Tutkun

Abstract:

The frailty model is a survival model that takes into account unobserved heterogeneity in exploring the relationship between the survival of an individual and several covariates. In recent years, proposed survival models have become more complex, and this complexity causes convergence problems, especially in large data sets. The selection of samples from these big data sets is therefore very important for parameter estimation. In the sampling literature, some authors have defined new sampling schemes to predict parameters correctly. To this end, we examine the effect of sampling design on the semi-parametric frailty model. We conducted a simulation study in the R programming language to estimate the parameters of the semi-parametric frailty model for different sample sizes and censoring rates under classical simple random sampling and ranked set sampling schemes. In the simulation study, we used as the population a data set recording 17,260 male civil servants aged 40–64 years with complete 10-year follow-up. Time to death from coronary heart disease is treated as the survival time, and age and systolic blood pressure are used as covariates. We selected 1,000 samples from the population using the different sampling schemes and estimated the parameters. From the simulation study, we concluded that the ranked set sampling design performs better than simple random sampling in each scenario.
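The two sampling schemes compared above can be sketched as follows. This is a simplified Python illustration under an assumed perfect-ranking setting, not the authors' R code; the population, set size, and function names are assumptions for the example.

```python
import random

def srs(population, n, rng):
    """Classical simple random sampling without replacement."""
    return rng.sample(population, n)

def rss(population, m, rng):
    """One ranked-set-sampling cycle of set size m: draw m independent
    sets of m units, rank each set, and keep the i-th order statistic
    from the i-th set (perfect ranking assumed)."""
    sample = []
    for i in range(m):
        ranked_set = sorted(rng.sample(population, m))
        sample.append(ranked_set[i])
    return sample

rng = random.Random(42)
pop = list(range(1000))
print(srs(pop, 5, rng))
print(rss(pop, 5, rng))  # units spread across the ranked sets
```

In practice the ranking would use a cheap auxiliary variable (e.g., age) rather than the survival outcome itself, which is where the efficiency gain over simple random sampling comes from.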

Keywords: frailty model, ranked set sampling, efficiency, simple random sampling

Procedia PDF Downloads 211
846 Effects of Process Parameter Variation on the Surface Roughness of Rapid Prototyped Samples Using Design of Experiments

Authors: R. Noorani, K. Peerless, J. Mandrell, A. Lopez, R. Dalberto, M. Alzebaq

Abstract:

Rapid prototyping (RP) is an additive manufacturing technology used in industry that works by systematically depositing layers of working material to construct larger, computer-modeled parts. A key challenge associated with this technology is that RP parts often feature undesirable levels of surface roughness for certain applications. To combat this phenomenon, an experimental technique called Design of Experiments (DOE) can be employed during the growth procedure to statistically analyze which RP growth parameters are most influential on part surface roughness. Utilizing DOE to identify such factors is important because it can be used to optimize a manufacturing process, saving time and money while increasing product quality. In this study, a four-factor/two-level DOE experiment was performed to investigate the effect of temperature, layer thickness, infill percentage, and infill speed on the surface roughness of RP prototypes. Samples were grown using the sixteen possible growth combinations associated with a four-factor/two-level study, and the surface roughness data were then gathered for each set of factors. After applying DOE statistical analysis to these data, it was determined that layer thickness played the most significant role in prototype surface roughness.
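The four-factor, two-level analysis described above can be sketched as follows. The responses are made up for illustration (they are not the study's roughness data), and the main-effect formula shown is the standard one for a full factorial design: the mean response at the high level minus the mean response at the low level, per factor.

```python
from itertools import product

def main_effects(results):
    """Estimate main effects in a two-level full factorial design:
    effect = mean(response at +1) - mean(response at -1) for each factor."""
    k = len(next(iter(results)))  # number of factors
    effects = []
    for f in range(k):
        hi = [y for x, y in results.items() if x[f] == 1]
        lo = [y for x, y in results.items() if x[f] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# Toy responses over all 16 runs: factor 1 (say, layer thickness)
# dominates the roughness response in this fabricated example.
runs = {x: 10 + 4 * x[1] + 0.5 * x[0] for x in product((-1, 1), repeat=4)}
print(main_effects(runs))
```

Ranking the absolute effects identifies the most influential parameter, mirroring the study's conclusion that one factor (layer thickness) dominated.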

Keywords: rapid prototyping, surface roughness, design of experiments, statistical analysis, factors and levels

Procedia PDF Downloads 262
845 Microscale Observations of a Gas Cell Wall Rupture in Bread Dough during Baking and Confrontation with 2D/3D Finite Element Simulations of Stress Concentration

Authors: Kossigan Bernard Dedey, David Grenier, Tiphaine Lucas

Abstract:

Bread dough is often described as a dispersion of gas cells in a continuous gluten/starch matrix. The final bread crumb structure is strongly related to gas cell wall (GCW) rupture during baking. At the end of proofing and during baking, part of the thinnest GCWs between expanding gas cells is reduced to a gluten film of about the size of a starch granule. When this size is reached, gluten and starch granules must be considered as interacting phases in order to account for heterogeneities and appropriately describe GCW rupture. Among the experimental investigations carried out to assess GCW rupture, none was performed to observe GCW rupture under baking conditions at the GCW scale. In addition, attempts to numerically understand GCW rupture are usually not performed at the GCW scale and often consider GCWs as continuous. The most relevant paper that accounted for heterogeneities dealt with gluten/starch interactions and their impact on the mechanical behavior of dough films; however, stress concentration in the GCW was not discussed. In this study, both experimental and numerical approaches were used to better understand GCW rupture in bread dough during baking. Experimentally, a macroscope placed in front of a two-chamber device was used to observe the rupture of a real GCW 200 micrometers in thickness. Special attention was paid to mimicking baking conditions as far as possible (temperature, gas pressure, and moisture). Various pressure differences were applied across the GCW, and different modes of fracture initiation and propagation in GCWs were observed. Numerically, the impact of gluten/starch interactions (cohesion or non-cohesion) and of the rheological moduli ratio on the mechanical behavior of a GCW under unidirectional extension was assessed in 2D/3D. A non-linear viscoelastic and hyperelastic approach was used to capture the finite strain involved in the GCW during baking, and stress concentration within the GCW was identified. The simulated stress concentrations were discussed in the light of the GCW failures observed in the device. The gluten/starch granule interactions and the rheological modulus ratio were found to have a great effect on the stress levels possibly reached in the GCW.

Keywords: dough, experimental, numerical, rupture

Procedia PDF Downloads 122
844 Visualization of Corrosion at Plate-Like Structures Based on Ultrasonic Wave Propagation Images

Authors: Aoqi Zhang, Changgil Lee Lee, Seunghee Park

Abstract:

A non-contact nondestructive technique using a laser-induced ultrasonic wave generation method was applied to visualize corrosion damage in aluminum alloy plate structures. The ultrasonic waves were generated by an Nd:YAG pulse laser, and a galvanometer-based laser scanner was used to scan a specific area of the target structure. At the same time, wave responses were measured at a piezoelectric sensor attached to the target structure. The visualization of structural damage was achieved by calculating logarithmic values of the root mean square (RMS). The damage-sensitive feature was defined as the scattering characteristics of the waves that encounter corrosion damage. The corrosion damage was formed artificially with hydrochloric acid. To observe the effect of the location where the corrosion was formed, both sides of the plate were scanned over the same scanning area. The effects of the depth and the size of the corrosion were also considered. The results indicated that the damage was successfully visualized in almost all cases, whether it was formed on the front or the back side. However, when the depth of the corrosion was shallow, the damage could not be clearly detected. Future work will develop a signal processing algorithm to visualize the damage more clearly by improving the signal-to-noise ratio.
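The log-RMS damage feature described above can be sketched per scan point as follows. The signals are synthetic and the function names are illustrative; in the actual system each scan point's measured wave response would be mapped to one pixel of the damage image.

```python
import math

def log_rms(signal, eps=1e-12):
    """Logarithm of the root-mean-square energy of one wave response.
    eps avoids log(0) for an all-zero trace."""
    rms = math.sqrt(sum(v * v for v in signal) / len(signal))
    return math.log10(rms + eps)

# Toy scan: the 'damaged' point scatters more wave energy than the intact one
intact = [0.1 * math.sin(0.3 * t) for t in range(200)]
damaged = [0.3 * math.sin(0.3 * t) for t in range(200)]
print(log_rms(intact), log_rms(damaged))
```

Plotting the log-RMS value of every scan point over the scanned grid yields the corrosion visualization; the logarithm compresses the dynamic range so weaker scattering features remain visible.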

Keywords: non-destructive testing, corrosion, pulsed laser scanning, ultrasonic waves, plate structure

Procedia PDF Downloads 300
843 PointNetLK-OBB: A Point Cloud Registration Algorithm with High Accuracy

Authors: Wenhao Lan, Ning Li, Qiang Tong

Abstract:

To improve the registration accuracy of a source point cloud and a template point cloud when the initial relative deflection angle is too large, a PointNetLK algorithm combined with an oriented bounding box (PointNetLK-OBB) is proposed. In this algorithm, the OBB of a 3D point cloud is used to represent the macro feature of the source and template point clouds. Under the guidance of the iterative closest point algorithm, the OBBs of the source and template point clouds are aligned, and a mirror symmetry effect is produced between them. According to the fitting degree of the source and template point clouds, the mirror symmetry plane is detected, and the optimal rotation and translation of the source point cloud are obtained to complete the 3D point cloud registration task. To verify the effectiveness of the proposed algorithm, a comparative experiment was performed using the publicly available ModelNet40 dataset. The experimental results demonstrate that, compared with PointNetLK, PointNetLK-OBB improves the registration accuracy of the source and template point clouds when the initial relative deflection angle is too large, and the sensitivity to the initial relative position between the source and template point clouds is reduced. The primary contributions of this paper are the use of PointNetLK to avoid the non-convex problem of traditional point cloud registration and the leveraging of the regularity of the OBB to avoid the local optimization problem in the PointNetLK context.
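For intuition, the oriented-bounding-box construction the algorithm relies on can be sketched in 2D as follows. This is a generic PCA-based OBB, not the paper's 3D implementation; the point set and names are illustrative assumptions.

```python
import math

def obb_2d(points):
    """Principal-axis oriented bounding box of 2D points.
    In 2D the principal direction has the closed form
    theta = 0.5 * atan2(2*sxy, sxx - syy) from the covariance entries."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    c, s = math.cos(theta), math.sin(theta)
    # Project onto the principal axes and measure the extents
    u = [c * (p[0] - mx) + s * (p[1] - my) for p in points]
    v = [-s * (p[0] - mx) + c * (p[1] - my) for p in points]
    return theta, (max(u) - min(u), max(v) - min(v))

# Toy cloud spread along the line y = 0.5x with small alternating noise
pts = [(x, 0.5 * x + e) for x, e in zip(range(10), [0.1, -0.1] * 5)]
theta, extents = obb_2d(pts)
print(theta, extents)
```

In 3D the same idea uses the eigenvectors of the 3x3 covariance matrix as box axes; aligning the source and template OBBs gives the coarse initial pose that PointNetLK then refines.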

Keywords: mirror symmetry, oriented bounding box, point cloud registration, PointNetLK-OBB

Procedia PDF Downloads 150
842 A Group Setting of IED in Microgrid Protection Management System

Authors: Jyh-Cherng Gu, Ming-Ta Yang, Chao-Fong Yan, Hsin-Yung Chung, Yung-Ruei Chang, Yih-Der Lee, Chen-Min Chan, Chia-Hao Hsu

Abstract:

A number of distributed generations (DGs) are installed in a microgrid, which may produce diverse paths and directions of power flow or fault current. The overcurrent protection scheme for the traditional radial-type distribution system therefore no longer meets the needs of microgrid protection. Integrating intelligent electronic devices (IEDs) and a supervisory control and data acquisition (SCADA) system with the IEC 61850 communication protocol, this paper proposes a microgrid protection management system (MPMS) to protect the power system from faults. In the proposed method, the MPMS performs logic programming of each IED to coordinate their tripping sequence. The GOOSE message defined in IEC 61850 is used as the transmission medium among IEDs. Moreover, to cope with the difference in microgrid fault current between grid-connected mode and islanded mode, the proposed MPMS applies the group setting feature of the IEDs to protect the system with robust adaptability. Whenever the microgrid topology varies, the MPMS recalculates the fault current and updates the group settings of the IEDs; if a fault occurs, the IEDs isolate it at once. Finally, the Matlab/Simulink and Elipse Power Studio software are used to simulate and demonstrate the feasibility of the proposed method.

Keywords: IEC 61850, IED, group setting, microgrid

Procedia PDF Downloads 461
841 Geology, Geomorphology and Genesis of Andarokh Karstic Cave, North-East Iran

Authors: Mojtaba Heydarizad

Abstract:

The Andarokh basin is one of the main karstic regions in Khorasan Razavi province, NE Iran. This basin is part of the Kopeh-Dagh mega zone, which extends from the Caspian Sea in the west to northern Afghanistan in the east. The basin is covered by the Mozdooran Formation, the Ngr evaporative formation, and Quaternary alluvium deposits, in descending order of age. The Mozdooran carbonate formation is notably karstified. The main surface karstic features in the Mozdooran formation are groove karren, cleft karren, rain pits, rill karren, tritt karren, kamenitza, domes, and table karren. In addition to surface features, a deep karstic feature, the Andarokh Cave, also exists in the region. Studying the Ca, Mg, Mn, Sr, and Fe concentrations and the Sr/Mn ratio in Mozdooran formation samples as a function of distance to the main fault and joint systems, using PCA analyses, demonstrates the intense role of meteoric diagenesis in controlling the carbonate rock geochemistry. Karst evolution in the Andarokh basin varies from the early-stage 'deep-seated karst' of the Mesozoic to the mature karstic system 'exhumed karst' of the Quaternary period. Andarokh Cave (the main cave in the basin) is a rudimentary branchwork consisting of three passages (A, B, and C) and two entrances, Andarokh and Sky.

Keywords: Andarokh basin, Andarokh cave, geochemical analyses, karst evolution

Procedia PDF Downloads 154
840 Development of Alternative Fuels Technologies: Compressed Natural Gas Home Refueling Station

Authors: Szymon Kuczynski, Krystian Liszka, Mariusz Laciak, Andrii Oliinyk, Adam Szurlej

Abstract:

Compressed natural gas (CNG) represents an excellent compromise between the availability of a technology that is proven and relatively easy to use in many areas of the automotive industry and the costs incurred. This fuel causes less corrosion owing to the lower content of products that create a potential difference on the walls of the engine system. Natural gas vehicles (NGVs) do not emit any substances that can contaminate water or land, and the absence of carcinogenic substances in the gaseous fuel extends engine life. In the longer term, it also contributes positively to waste management and disposal. Popularization of propulsion systems powered by CNG positively affects the reduction of emissions from heavy-duty transport. For these reasons, CNG as a fuel attracts considerable interest around the world. Over the last few years, technologies related to the use of natural gas as an engine fuel have been developed and improved, evolving from the prototype phase to industrial-scale implementation. The widespread availability of gaseous fuels has led to the development of a technology that allows CNG fuel to be refueled directly from the urban gas network into the vehicle tank (i.e., HYGEN - CNGHRS). Home refueling installations, although known for many years, are becoming increasingly important today. Until recently, the major obstacle to the sale of this technology was the rather high capital expenditure compared to the later benefits. Home refueling systems allow the vehicle tank to be refueled with full control of fuel costs and refueling time. CNG home refueling stations (such as HYGEN) allow the gas value chain to overcome the dogma that a lack of refueling infrastructure prevents gas companies from participating in the transportation market. The technology is based on a one-stage hydraulic compressor (instead of multistage mechanical compressor technology), which provides the possibility of compressing low-pressure gas from the distribution gas network to 200 bar for further use as a fuel for NGVs. This boosts the revenues and profits of gas companies by expanding their presence in a higher-margin energy sector.

Keywords: alternative fuels, CNG (compressed natural gas), CNG stations, NGVs (natural gas vehicles), gas value chain

Procedia PDF Downloads 200
839 Reed: An Approach Towards Quickly Bootstrapping Multilingual Acoustic Models

Authors: Bipasha Sen, Aditya Agarwal

Abstract:

A multilingual automatic speech recognition (ASR) system is a single entity capable of transcribing multiple languages sharing a common phone space. The performance of such a system is highly dependent on the compatibility of the languages. State-of-the-art speech recognition systems are built using sequential architectures based on recurrent neural networks (RNNs), which limit computational parallelization in training. This poses a significant challenge in terms of the time taken to bootstrap and validate the compatibility of multiple languages for building a robust multilingual system. Complex architectural choices based on self-attention networks have been made to improve parallelization and thereby reduce training time. In this work, we propose Reed, a simple system based on 1D convolutions that uses a very short context to improve training time. To improve the performance of our system, we use raw time-domain speech signals directly as input. This enables the convolutional layers to learn feature representations rather than relying on handcrafted features such as MFCCs. We report improvements in training and inference times by at least factors of 4x and 7.4x, respectively, with comparable WERs against standard RNN-based baseline systems on SpeechOcean's multilingual low-resource dataset.
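The short-context 1D convolution that such a system builds on can be sketched as follows. This is a generic pure-Python illustration of the building block (as in deep learning frameworks, it is really cross-correlation: the kernel is not flipped), not the Reed architecture itself; the toy waveform and kernel are assumptions.

```python
def conv1d(signal, kernel, stride=1):
    """Valid-mode 1D convolution (deep-learning convention, no kernel flip):
    slide the kernel over the signal and take the dot product at each step."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(0, len(signal) - k + 1, stride)]

# A short difference kernel over a raw waveform-like sequence:
# it responds at the rising and falling edges of the pulse.
wave = [0, 0, 0, 1, 1, 1, 0, 0]
print(conv1d(wave, [-1, 1]))
```

Because each output depends only on a short window of the input, every window can be computed independently, which is the parallelization advantage over the step-by-step recurrence of an RNN.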

Keywords: convolutional neural networks, language compatibility, low resource languages, multilingual automatic speech recognition

Procedia PDF Downloads 123
838 Mechanism of Action of Troxerutin in Reducing Oxidative Stress

Authors: Nasrin Hosseinzad

Abstract:

Troxerutin, a trihydroxyethylated derivative of rutin, is a flavonoid found in tea, coffee, cereal grains, and various fruits and vegetables, and has been reported to display radioprotective, antithrombotic, nephroprotective, and hepatoprotective properties. Troxerutin could increase the resistance of hippocampal neurons against apoptosis by lessening the activity of AChE and oxidative stress; consequently, it may have beneficial properties in the management of Alzheimer's disease and cancer. Troxerutin has been reported to have several benefits and medicinal properties. It could protect the mouse kidney against D-gal-induced damage by improving renal function, decreasing histopathologic changes, reducing ROS production, restoring the activities of antioxidant enzymes, and reducing oxidative DNA damage. A DNA cleavage study clarified that troxerutin protects DNA against hydroxyl radical-induced damage. Troxerutin exerts an anti-cancer effect in HuH-7 hepatocarcinoma cells, conceivably through coordinated regulation of the Nrf2 and NF-κB molecular signalling pathways. DNA binding at the minor groove by troxerutin may contribute to strand breaks, leading to enhanced radiation-induced cell death. Furthermore, the mechanism underlying the observed difference in the antioxidant activities of troxerutin and its esters was attributed to both their free radical scavenging capabilities and their distribution on the outer cell membrane surface.

Keywords: troxerutin, DNA, oxidative stress, antioxidant, free radical

Procedia PDF Downloads 160
837 Modeling Optimal Lipophilicity and Drug Performance in Ligand-Receptor Interactions: A Machine Learning Approach to Drug Discovery

Authors: Jay Ananth

Abstract:

The drug discovery process currently requires numerous years of clinical testing and considerable money for a single drug to earn FDA approval. Even for drugs that make it this far in the process, there is a very slim chance of receiving FDA approval, creating detrimental hurdles to drug accessibility. To minimize these inefficiencies, numerous studies have implemented computational methods, although few computational investigations have focused on a crucial feature of drugs: lipophilicity. Lipophilicity is a physical attribute of a compound that measures its solubility in lipids and is a determinant of drug efficacy. This project leverages artificial intelligence to predict the impact of a drug’s lipophilicity on its performance by accounting for factors such as binding affinity and toxicity. The model predicted lipophilicity and binding affinity in the validation set with high R² scores of 0.921 and 0.788, respectively, while also being applicable to a variety of target receptors. The results show a strong positive correlation between lipophilicity and both binding affinity and toxicity. The model helps in both drug development and discovery, providing pharmaceutical companies with recommended lipophilicity levels for drug candidates as well as a rapid assessment of early-stage drugs prior to any testing, eliminating significant amounts of the time and resources that currently restrict drug accessibility.

Keywords: drug discovery, lipophilicity, ligand-receptor interactions, machine learning, drug development

Procedia PDF Downloads 111
836 Achieving Design-Stage Elemental Cost Planning Accuracy: Case Study of New Zealand

Authors: Johnson Adafin, James O. B. Rotimi, Suzanne Wilkinson, Abimbola O. Windapo

Abstract:

An aspect of client expenditure management that requires attention is the level of accuracy achievable in design-stage elemental cost planning. This has been a major concern for construction clients and practitioners in New Zealand (NZ). Pre-tender estimating inaccuracies are significantly influenced by the level of risk information available to estimators. Proper cost planning activities should ensure the production of a project’s likely construction costs (initial and final), and subsequent cost control activities should prevent the unpleasant consequences of cost overruns, disputes, and project abandonment. If risks were properly identified and priced at the design stage, the observed variance between design-stage elemental cost plans (ECPs) and final tender sums (FTS) (initial contract sums) could be reduced. This study investigates the variations between design-stage ECPs and the FTS of construction projects, with a view to identifying the risk factors responsible for the observed variance. Data were sourced through interviews, and risk factors were identified using thematic analysis. Access was obtained to project files from the records of the study participants (consultant quantity surveyors), and document analysis was employed to complement the interview responses. The findings revealed discrepancies between ECPs and FTS in the region of -14% to +16%, and it is opined that the identified risk factors were responsible for the observed variability. The values obtained from the analysis would enable greater accuracy in the forecast of FTS by quantity surveyors. Further, whilst risks inherent in construction project development are observed globally, these findings have important ramifications for construction projects by expanding existing knowledge of what is needed for reasonable budgetary performance and the successful delivery of construction projects. They also provide quantitative confirmation of the theoretical conclusions generated in the literature from around the world, thereby adding to and consolidating existing knowledge.

Keywords: accuracy, design-stage, elemental cost plan, final tender sum

Procedia PDF Downloads 268
835 Fused Structure and Texture (FST) Features for Improved Pedestrian Detection

Authors: Hussin K. Ragb, Vijayan K. Asari

Abstract:

In this paper, we present a pedestrian detection descriptor called Fused Structure and Texture (FST) features, based on the combination of local phase information with texture features. Since the phase of a signal conveys more structural information than the magnitude, the phase congruency concept is used to capture the structural features, while the Center-Symmetric Local Binary Pattern (CSLBP) approach is used to capture the texture information of the image. The dimensionless nature of phase congruency and the robustness of the CSLBP operator on flat image regions, as well as under blur and illumination changes, make the proposed descriptor more robust and less sensitive to light variations. The descriptor is formed by extracting the phase congruency and the CSLBP value of each pixel of the image with respect to its neighborhood. The histogram of the oriented phase and the histogram of the CSLBP values for the local regions of the image are computed and concatenated to construct the FST descriptor. Several experiments were conducted on the INRIA and the low-resolution DaimlerChrysler datasets to evaluate the detection performance of a pedestrian detection system based on the FST descriptor, with a linear Support Vector Machine (SVM) used to train the pedestrian classifier. These experiments showed that the proposed FST descriptor achieves better detection performance than a set of state-of-the-art feature extraction methodologies.
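The CSLBP code of a single pixel can be sketched as follows. This follows the common formulation of the operator (each bit compares a neighbor with its diametrically opposite neighbor); the threshold value and the neighbor ordering are illustrative assumptions, not necessarily those used in the paper.

```python
def cslbp(neighbors, t=0.01):
    """Center-Symmetric LBP code for 8 neighbors of a pixel:
    bit i is set when neighbor i exceeds its opposite neighbor i+4
    by more than the threshold t. Codes range over 0..15."""
    code = 0
    for i in range(4):
        if neighbors[i] - neighbors[i + 4] > t:
            code |= 1 << i
    return code

# Neighbors of one pixel, listed clockwise around a 3x3 window
# (center pixel itself is not used by the operator)
print(cslbp([0.9, 0.2, 0.8, 0.1, 0.1, 0.3, 0.2, 0.1]))
```

With only four comparisons per pixel, the codes fall into 16 bins, so the per-region histograms that feed the FST descriptor stay compact compared with the 256 bins of the standard LBP.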

Keywords: pedestrian detection, phase congruency, local phase, LBP features, CSLBP features, FST descriptor

Procedia PDF Downloads 488
834 Taylor’s Law and Relationship between Life Expectancy at Birth and Variance in Age at Death in Period Life Table

Authors: David A. Swanson, Lucky M. Tedrow

Abstract:

Taylor’s Law is a widely observed empirical pattern that relates variances to means in sets of non-negative measurements via an approximate power function, and it has found application to human mortality. This study adds to this research by showing that Taylor’s Law leads to a model that reasonably describes the relationship between life expectancy at birth (e0, which also equals the mean age at death in a life table) and variance in age at death in seven World Bank regional life tables measured at two points in time, 1970 and 2000. Using as a benchmark a non-random sample of four Japanese female life tables covering the period from 1950 to 2004, the study finds that the simple linear model provides reasonably accurate estimates of variance in age at death in a life table from e0, where the latter ranges from 60.9 to 85.59 years. Employing 2017 life tables from the Human Mortality Database, the simple linear model is used to provide estimates of variance in age at death for six countries, three of which have high e0 values and three of which have lower e0 values. The paper provides a substantive interpretation of Taylor’s Law relative to e0 and concludes by arguing that reasonably accurate estimates of variance in age at death in a period life table can be calculated using this approach, which also can be used where e0 itself is estimated rather than generated through the construction of a life table, a useful feature of the model.
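A Taylor’s-Law fit of this kind can be sketched as a linear regression on the log-log scale, Var = a · Mean^b; the life-table values below are synthetic placeholders, not the World Bank or Japanese data used in the study:

```python
import numpy as np

# Synthetic (e0, variance-in-age-at-death) pairs, invented for illustration.
e0   = np.array([60.9, 65.0, 70.2, 75.5, 80.1, 85.59])   # mean age at death
var_ = np.array([420.0, 360.0, 300.0, 240.0, 190.0, 150.0])

# Taylor's Law, Var = a * Mean^b, is linear in logs:
# log(Var) = log(a) + b * log(Mean)
b, log_a = np.polyfit(np.log(e0), np.log(var_), 1)
a = np.exp(log_a)

def predict_variance(e0_new):
    """Estimate variance in age at death from e0 via the fitted power law."""
    return a * e0_new ** b
```

The paper’s simple linear model regresses variance directly on e0; the log-log form above is the conventional way of estimating the Taylor’s-Law exponent b.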

Keywords: empirical pattern, mean age at death in a life table, mean age of a stationary population, stationary population

Procedia PDF Downloads 330
833 Optoelectronic Hardware Architecture for Recurrent Learning Algorithm in Image Processing

Authors: Abdullah Bal, Sevdenur Bal

Abstract:

This paper proposes a new type of hardware application for training cellular neural networks (CNN), using an optical joint transform correlation (JTC) architecture for image feature extraction. CNNs require much more computation during the training stage than during the test process. Since optoelectronic hardware offers parallel high-speed processing capability for 2D data, the CNN training algorithm can be realized using Fourier optics techniques. JTC employs lenses and CCD cameras with a laser beam to realize 2D matrix multiplication and summation at the speed of light. Therefore, in each training iteration, JTC inherently carries most of the computational burden, while the rest of the mathematical computation is realized digitally. Bipolar data are encoded by phase, and the summation of correlation operations is realized using multi-object input joint images. The overlapping properties of JTC are then utilized for the summation of two cross-correlations, which further reduces the computation required for the training stage. Phase-only JTC does not require data rearrangement, electronic pre-calculation, or strict system alignment. The proposed system can be incorporated simultaneously with various optical image processing or optical pattern recognition techniques in the same optical system.
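The JTC principle can be simulated digitally; the sketch below is an illustrative simulation, not the authors’ optical setup: it places reference and target side by side in one joint input plane, records the squared-magnitude spectrum as a CCD would, and Fourier-transforms again to obtain the correlation plane:

```python
import numpy as np

def jtc_correlation(ref, tgt):
    """Digital analogue of a joint transform correlator for two equal-size images."""
    h, w = ref.shape
    joint = np.zeros((h, 3 * w))            # joint input plane with a gap
    joint[:, :w] = ref                      # reference on one side
    joint[:, 2 * w:] = tgt                  # target on the other
    jps = np.abs(np.fft.fft2(joint)) ** 2   # joint power spectrum (CCD capture)
    out = np.abs(np.fft.fft2(jps))          # second transform -> correlation plane
    return np.fft.fftshift(out)             # autocorrelations on-axis,
                                            # cross-correlations off-axis

rng = np.random.default_rng(0)
patch = rng.random((16, 16))
plane = jtc_correlation(patch, patch)       # identical inputs -> strong peaks
```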

Keywords: CNN training, image processing, joint transform correlation, optoelectronic hardware

Procedia PDF Downloads 506
832 Characteristics and Key Exploration Directions of Gold Deposits in China

Authors: Bin Wang, Yong Xu, Honggang Qu, Rongmei Liu, Zhenji Gao

Abstract:

Based on the geodynamic environment and the basic geological characteristics of the minerals, gold deposits in China are divided into 11 categories, of which the tectonic fracture altered rock, intrusion and contact zone, micro-fine disseminated, and continental volcanic types are the main prospecting kinds. The metallogenic ages of gold deposits in China are dominated by the Mesozoic and Cenozoic. According to geotectonic units, geological evolution, geological conditions, spatial distribution, gold deposit types, metallogenic factors, etc., 42 gold concentration areas are preliminarily delineated, showing a concentrated distribution pattern. On the basis of gold exploration density, the gold concentration areas are divided into high-, medium- and low-level areas. High-level areas are mainly distributed in the central and eastern regions. 93.04% of gold exploration drillholes are shallower than 500 meters, reflecting problems such as insufficient and shallow drilling verification. The paper discusses the resource potential of gold deposits and proposes future prospecting directions and suggestions. The deep levels and peripheries of old mines in the central and eastern regions, together with the western area, especially Xinjiang and Qinghai, will be the key future prospecting targets and hold huge potential gold reserves. If the exploration depth is extended to 2,000 meters, the identified gold resources could double.

Keywords: gold deposits, gold deposits types, gold concentration areas, prospecting, resource potentiality

Procedia PDF Downloads 77
831 A Review of Toxic and Non-Toxic Cyanobacteria Species Occurrence in Water Supplies Destined for Maize Meal Production Process: A Case Study of Vhembe District

Authors: M. Mutoti, J. Gumbo, A. Jideani

Abstract:

Cyanobacteria, or blue-green algae, have been part of the human diet for thousands of years. Cyanobacteria can multiply quickly in surface waters and form blooms when favorable conditions prevail, such as high temperature, intense light, high pH, and increased availability of nutrients, especially phosphorus and nitrogen, artificially released by anthropogenic activities. Consumption of edible cyanobacteria such as Spirulina may reduce the risks of cataracts and age-related macular degeneration. Sulfated polysaccharides exhibit antitumor, anticoagulant, anti-mutagenic, anti-inflammatory, and antimicrobial activity, and even antiviral activity against HIV, herpes, and hepatitis. In humans, exposure to cyanotoxins can occur in various ways; however, the oral route is the most important, mainly through drinking water or eating contaminated foods; it may even involve ingesting water during recreational activities. This paper presents a review of cyanobacteria/cyanotoxin contamination of water and food and the implications for human health, in particular examining the quality of the water used when maize passes through mill grinding processes. To fulfil this objective, the paper starts with the theoretical framework on cyanobacterial contamination of food that guides the review. A number of methods for decontaminating cyanotoxins in food are currently available; therefore, physical, chemical, and biological methods for treating cyanotoxins are reviewed and compared. Furthermore, methods utilized for detecting and identifying cyanobacteria present in water and food are also covered. This review has indicated various routes through which humans can be exposed to cyanotoxins. The accumulation of cyanotoxins, mainly microcystins, in food has raised awareness of the importance of food as a microcystin exposure route to the human body. 
Therefore, this review demonstrates the importance of expanding research on cyanobacteria/cyanotoxin contamination of water and food for water treatment and water supply management, with a focus on examining water for domestic use. This will help provide information regarding the prevention or minimization of contamination of water and food, the reduction or removal of contamination through treatment processes, and the prevention of recontamination in the distribution system.

Keywords: biofilm, cyanobacteria, cyanotoxin, food contamination

Procedia PDF Downloads 160
830 Effective Parameter Selection for Audio-Based Music Mood Classification for Christian Kokborok Song: A Regression-Based Approach

Authors: Sanchali Das, Swapan Debbarma

Abstract:

Music mood classification is developing in both music information retrieval (MIR) and natural language processing (NLP). Languages like Hindi, English, etc., have considerable exposure in MIR, but research on mood classification in regional languages is scarce. In this paper, powerful audio-based features for Christian Kokborok songs are identified, and the mood classification task is performed. Kokborok is a Tibeto-Burman language spoken mainly in the northeastern part of India and also in other countries like Bangladesh and Myanmar. For the audio-based classification task, useful audio features are extracted with the jMIR software. Standard audio parameters exist for audio-based tasks, but every language has its own unique characteristics, so here the most significant features that best fit the database of Kokborok songs are analysed. A regression-based model is used to find the independent parameters that act as predictors, to capture the dependencies among parameters, and to show how they impact the overall classification result. For classification, WEKA 3.5 is used, and the selected parameters form a classification model. Another model is developed using all the standard audio features used by most researchers. In this experiment, the essential parameters responsible for effective audio-based mood classification, and the parameters that do not change significantly across the Christian Kokborok songs, are analysed, and a comparison between the two models is also presented.
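The regression-based predictor screening can be sketched as follows; the feature names and data are invented placeholders, not the jMIR features or the Kokborok song database used in the paper:

```python
import numpy as np

# Synthetic stand-ins for jMIR-style audio features (names are hypothetical).
rng = np.random.default_rng(1)
names = ["spectral_centroid", "rms_energy", "zero_crossing_rate", "tempo"]
X = rng.normal(size=(60, 4))
# Synthetic numeric mood label driven mostly by two of the features.
y = 2.0 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.1, size=60)

# Standardize predictors so coefficient magnitudes are comparable,
# then fit ordinary least squares with an intercept column.
Xs = (X - X.mean(0)) / X.std(0)
A = np.column_stack([np.ones(len(Xs)), Xs])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Rank features by absolute standardized coefficient: a crude screen for
# which predictors matter most to the mood label.
ranking = sorted(zip(names, np.abs(coef[1:])), key=lambda t: -t[1])
```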

Keywords: Christian Kokborok song, mood classification, music information retrieval, regression

Procedia PDF Downloads 221
829 Design and Development of 5-DOF Color Sorting Manipulator for Industrial Applications

Authors: Atef A. Ata, Sohair F. Rezeka, Ahmed El-Shenawy, Mohammed Diab

Abstract:

Image processing in today’s world attracts massive attention, as it opens possibilities for broader application in many fields of high technology. The real challenge is how to improve existing sorting system applications, which consist of two integrated stations of processing and handling, with a new image processing feature. Existing color sorting techniques use a set of inductive, capacitive, and optical sensors to differentiate object color. This research presents a mechatronic color sorting system solution based on image processing. A 5-DOF robot arm with pick-and-place operation is designed and developed as the main part of the color sorting system. The image processing procedure detects circular objects in an image captured in real time by a webcam attached at the end-effector, then extracts color and position information from it. This information is passed as a sequence of sorting commands to the manipulator, which has a pick-and-place mechanism. Performance analysis proves that this color-based object sorting system works very accurately under ideal conditions in terms of adequate illumination, circular object shape, and color. The circular objects tested for sorting are red, green, and blue. For non-ideal conditions, such as an unspecified color, the accuracy drops to 80%.
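The color-decision step can be sketched as a simple dominant-channel rule over a detected object’s mean color; the margin threshold and the "unspecified" fallback are illustrative assumptions, not the authors’ implementation:

```python
def classify_color(mean_rgb, margin=30):
    """Classify a detected object's mean (R, G, B) color for the sorter."""
    r, g, b = mean_rgb
    channels = {"red": r, "green": g, "blue": b}
    best = max(channels, key=channels.get)
    others = [v for k, v in channels.items() if k != best]
    # If no channel clearly dominates, the color is ambiguous: this is the
    # "unspecified color" case where sorting accuracy degrades.
    if channels[best] - max(others) < margin:
        return "unspecified"
    return best

# The classifier's output would become the sorting command sent to the arm.
command = classify_color((200, 40, 35))
```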

Keywords: robotics manipulator, 5-DOF manipulator, image processing, color sorting, pick-and-place

Procedia PDF Downloads 374
828 Establishment of a Classifier Model for Early Prediction of Acute Delirium in Adult Intensive Care Unit Using Machine Learning

Authors: Pei Yi Lin

Abstract:

Objective: The objective of this study is to use machine learning methods to build an early prediction classifier model for acute delirium to improve the quality of medical care for intensive care patients. Background: Delirium is a common acute and sudden disturbance of consciousness in critically ill patients. Once it occurs, it tends to prolong the length of hospital stay and increase medical costs and mortality. In 2021, the incidence of delirium in the internal medicine intensive care unit was as high as 59.78%, which indirectly prolonged the average length of hospital stay by 8.28 days, and the mortality rate was about 2.22% over the past three years. Therefore, this study aims to build a delirium prediction classifier through big data analysis and machine learning methods to detect delirium early. Method: This is a retrospective study, using an artificial intelligence big data database to extract the characteristic factors related to delirium in intensive care unit patients for machine learning. The study included patients aged over 20 years who were admitted to the intensive care unit between May 1, 2022, and December 31, 2022, excluding those with a GCS assessment <4 points, an ICU stay of less than 24 hours, or no CAM-ICU evaluation. The CAM-ICU delirium assessment results every 8 hours within 30 days of hospitalization are regarded as events, and the cumulative data from ICU admission to the prediction time point are extracted to predict the possibility of delirium occurring in the next 8 hours. A total of 63,754 case records were collected, and 12 features were selected to train the model, including age, sex, average ICU stay in hours, visual and auditory abnormalities, RASS assessment score, APACHE-II score, number of indwelling invasive catheters, physical restraint, and sedative and hypnotic drugs. 
Through feature data cleaning and processing and KNN imputation, a total of 54,595 case events were extracted for machine learning analysis. Events from May 1 to November 30, 2022, served as the model development data, of which 80% formed the training set and 20% the internal validation set; events from December 1 to December 31, 2022, formed the external validation set. Model inference and performance evaluation were then performed, and the model was retrained with adjusted parameters. Results: In this study, XGBoost, Random Forest, Logistic Regression, and Decision Tree models were analyzed and compared. The average accuracy in internal validation was highest for Random Forest (AUC = 0.86); in external validation, Random Forest and XGBoost were highest (AUC = 0.86); and the average cross-validation accuracy was highest for Random Forest (ACC = 0.77). Conclusion: Clinically, medical staff usually conduct CAM-ICU assessments at the bedside of critically ill patients, but there is a lack of machine learning classification methods to assist in real-time assessment of ICU patients, so more objective and continuous monitoring data are not available to help clinical staff accurately identify and predict the occurrence of delirium. It is hoped that the development of predictive models through machine learning can predict delirium early and immediately, support clinical decisions at the best time, and, in cooperation with PADIS delirium care measures, provide individualized non-drug interventions to maintain patient safety and thereby improve the quality of care.
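The KNN imputation step in the data-cleaning pipeline can be sketched as follows; the data and the choice of k are illustrative, since the study does not specify them:

```python
import numpy as np

def knn_impute(X, k=3):
    """Fill each missing value with the mean of that feature over the k
    rows nearest in the features observed for the incomplete row."""
    X = X.astype(float).copy()
    for i, j in zip(*np.where(np.isnan(X))):
        obs = ~np.isnan(X[i])                    # features observed for row i
        cand = np.where(~np.isnan(X[:, j]))[0]   # rows where feature j exists
        cand = cand[cand != i]
        # Euclidean distance over row i's observed features (NaNs zeroed).
        d = np.linalg.norm(np.nan_to_num(X[cand][:, obs] - X[i, obs]), axis=1)
        nearest = cand[np.argsort(d)[:k]]
        X[i, j] = X[nearest, j].mean()
    return X
```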

Keywords: critically ill patients, machine learning methods, delirium prediction, classifier model

Procedia PDF Downloads 75
827 Define Immersive Need Level for Optimal Adoption of Virtual Words with BIM Methodology

Authors: Simone Balin, Cecilia M. Bolognesi, Paolo Borin

Abstract:

In the construction industry, there is a large amount of data and interconnected information. To manage this information effectively, a transition to the immersive digitization of information processes is required. This transition is important to improve knowledge circulation, product quality, production sustainability and user satisfaction. However, there is currently a lack of a common definition of immersion in the construction industry, leading to misunderstandings and limiting the use of advanced immersive technologies. Furthermore, the lack of guidelines and a common vocabulary causes interested actors to abandon the virtual world after the first collaborative steps. This research aims to define the optimal use of immersive technologies in the AEC sector, particularly for collaborative processes based on the BIM methodology. Additionally, the research focuses on creating classes and levels to structure and define guidelines and a vocabulary for the use of the "Immersive Need Level." This concept, matured by recent technological advancements, aims to enable a broader application of state-of-the-art immersive technologies, avoiding misunderstandings, redundancies, or paradoxes. While the concept of "Informational Need Level" has been well clarified by the recent UNI EN 17412-1:2021 standard, when it comes to immersion, current regulations and literature provide only some hints about the technology and related equipment, leaving the procedural approach unexplored and open to the user's free interpretation. Therefore, once the necessary knowledge and information are acquired (Informational Need Level), it is possible to transition to an Immersive Need Level that involves the practical application of the acquired knowledge, exploring scenarios and solutions in a more thorough and detailed manner, with user involvement, via different immersion scales, in the design, construction or management process of a building or infrastructure. 
The need for information constitutes the basis for acquiring relevant knowledge and information, while the immersive need can manifest itself later, once a solid information base has been solidified, using the senses and developing immersive awareness. This new approach could solve the problem of inertia among AEC industry players in adopting and experimenting with new immersive technologies, expanding collaborative iterations and the range of available options.

Keywords: AEC industry, immersive technology (IMT), virtual reality, augmented reality, building information modeling (BIM), decision making, collaborative process, information need level, immersive need level

Procedia PDF Downloads 99
826 Design and Control of a Knee Rehabilitation Device Using an MR-Fluid Brake

Authors: Mina Beheshti, Vida Shams, Mojtaba Esfandiari, Farzaneh Abdollahi, Abdolreza Ohadi

Abstract:

Most people who survive a stroke need rehabilitation tools to regain their mobility. The core component of these devices is a brake actuator. The goal of this study is to design and control a magnetorheological brake that can be used as a rehabilitation tool. The fluid used in this brake is called magnetorheological (MR) fluid, whose properties change with variation of the magnetic field; this feature allows the braking properties to be controlled. In this research, different MR brake designs are first introduced; for each design, the dimensions of the brake are determined based on the torque required for foot movement. To calculate the brake dimensions, it is assumed that the shear stress distribution in the fluid is uniform and that the fluid is in its saturated state. After designing the rehabilitation brake, a mathematical model of the normal knee movement of a healthy person is extracted. Due to the nonlinear nature of the system and its variability, various adaptive, neural network, and robust controllers have been implemented to estimate the parameters and control the system. After calculating the torque and control current, the best controller in terms of error and control current is selected. Finally, this controller is implemented on experimental data of the patient's movements, and the control current is calculated to achieve the desired torque and motion.
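Under the paper’s stated assumptions (uniform shear stress, saturated fluid), the torque of a disc-type MR brake can be sketched with a Bingham-plastic model: a current-dependent yield term plus a viscous term. The linear yield law tau_y = k·I and all parameter values below are illustrative assumptions, not the authors’ design data:

```python
from math import pi

def mr_brake_torque(I, omega, r_i=0.02, r_o=0.05, h=1e-3,
                    eta=0.1, k=40e3, N=2):
    """Braking torque [N*m] of a disc MR brake with N fluid interfaces.

    I      coil current [A]   (hypothetical linear yield law tau_y = k * I)
    omega  shaft speed [rad/s]
    r_i, r_o  inner/outer radii of the fluid annulus [m]
    h      fluid gap [m], eta  plastic viscosity [Pa*s]
    """
    tau_y = k * I  # field-induced yield stress [Pa], saturating law omitted
    T_yield = N * (2 * pi * tau_y / 3) * (r_o**3 - r_i**3)
    T_visc = N * (pi * eta * omega / (2 * h)) * (r_o**4 - r_i**4)
    return T_yield + T_visc
```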

Keywords: rehabilitation, magnetorheological fluid, knee, brake, adaptive control, robust control, neural network control, torque control

Procedia PDF Downloads 151
825 Research on the Spatial Organization and Collaborative Innovation of Innovation Corridors from the Perspective of Ecological Niche: A Case Study of Seven Municipal Districts in Jiangsu Province, China

Authors: Weikang Peng

Abstract:

The innovation corridor is an important spatial carrier for promoting regional collaborative innovation, and its development is a process of spatially re-organizing regional innovation resources. This paper takes the Nanjing-Zhenjiang G312 Industrial Innovation Corridor, which involves seven municipal districts in Jiangsu Province, as its empirical case. Based on multi-source spatial big data for 2010, 2016, and 2022, this paper applies triangulated irregular networks (TIN), head/tail breaks, a regional innovation ecosystem (RIE) niche fitness evaluation model, and social network analysis to carry out empirical research on the spatial organization and functional-structural evolution of the innovation corridor and their correlation with the structural evolution of its collaborative innovation network. The results show, first, that the development of innovation patches in the corridor has fractal characteristics in time and space and tends toward a multi-center, clustered layout along the Nanjing Bypass Highway and National Highway G312. Second, there are large differences in the spatial distribution of niche fitness in the corridor across dimensions, and the niche fitness of innovation patches along the highway has increased significantly. Third, the collaborative innovation network in the corridor is expanding rapidly. The core of the network is shifting from the main urban area to the periphery of the city along the highway, exhibiting small-world and hierarchical characteristics, with a prominent core-edge structure. As the innovation corridor develops, the main collaborative mode is changing from collaboration within innovation patches to collaboration between innovation patches, and innovation patches with high ecological suitability tend to be the active areas of collaborative innovation. 
Overall, polycentric spatial layout, graded functional structure, diversified innovation clusters, and differentiated environmental support play an important role in effectively constructing collaborative innovation linkages and the stable expansion of the scale of collaborative innovation within the innovation corridor.
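The head/tail breaks classification applied to such heavy-tailed patch data can be sketched as follows; this is a standard formulation of the algorithm, with the 40% head limit as an assumed convention rather than the paper’s setting:

```python
def head_tail_breaks(values, head_limit=0.4):
    """Class boundaries for heavy-tailed data: split at the mean, keep the
    head (values above the mean), and recurse while the head is a minority."""
    breaks, head = [], list(values)
    while len(head) > 1:
        m = sum(head) / len(head)
        new_head = [v for v in head if v > m]
        # Stop when the head is empty or no longer a clear minority.
        if not new_head or len(new_head) / len(head) > head_limit:
            break
        breaks.append(m)
        head = new_head
    return breaks  # one boundary per hierarchical level
```

Each returned boundary separates one hierarchical level of the distribution, which is how fractal, scaling structures such as the corridor’s innovation patches are typically classed.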

Keywords: innovation corridor development, spatial structure, niche fitness evaluation model, head/tail breaks, innovation network

Procedia PDF Downloads 20
824 Optimizing PharmD Education: Quantifying Curriculum Complexity to Address Student Burnout and Cognitive Overload

Authors: Frank Fan

Abstract:

PharmD (Doctor of Pharmacy) education has confronted an increasing challenge: curricular overload, a phenomenon resulting from the expansion of curricular requirements as PharmD education strives to produce graduates who are practice-ready. The aftermath of the global pandemic has amplified the need for healthcare professionals, leading to a growing trend of assigning more responsibilities to them to address the global healthcare shortage. For instance, the pharmacist's role has expanded to include not only compounding and dispensing medication but also providing clinical services, including minor ailment management, patient counselling, and vaccination. Consequently, PharmD programs have responded by continually expanding their curricula and adding more requirements. While these changes aim to enhance the education and training of future professionals, they have also led to unintended consequences, including curricular overload, student burnout, and a potential decrease in program quality. To address the issue and ensure program quality, there is a growing need for evidence-based curriculum reform. My research seeks to integrate Cognitive Load Theory, emerging machine learning algorithms within artificial intelligence (AI), and statistical approaches to develop a quantitative framework for optimizing curriculum design within the PharmD program at the University of Toronto, the largest PharmD program in Canada, providing quantification and measurement of issues that are currently discussed only anecdotally rather than with data. This research will serve as a guide for curriculum planners, administrators, and educators, aiding in the comprehension of how the pharmacy degree program compares to others within and beyond the field of pharmacy. It will also shed light on opportunities to reduce the curricular load while maintaining its quality and rigor. 
Given that pharmacists constitute the third-largest healthcare workforce, their education shares similarities and challenges with other health education programs. Therefore, my evidence-based, data-driven curriculum analysis framework holds significant potential for training programs in other healthcare professions, including medicine, nursing, and physiotherapy.

Keywords: curriculum, curriculum analysis, health professions education, reflective writing, machine learning

Procedia PDF Downloads 61
823 Protecting the Health of Astronauts: Enhancing Occupational Health Monitoring and Surveillance for Former NASA Astronauts to Understand Long-Term Outcomes of Spaceflight-Related Exposures

Authors: Meredith Rossi, Lesley Lee, Mary Wear, Mary Van Baalen, Bradley Rhodes

Abstract:

The astronaut community is unique, and may be disproportionately exposed to occupational hazards not commonly seen in other communities. The extent to which the demands of the astronaut occupation and exposure to spaceflight-related hazards affect the health of the astronaut population over the life course is not completely known. A better understanding of the individual, population, and mission impacts of astronaut occupational exposures is critical to providing clinical care, targeting occupational surveillance efforts, and planning for future space exploration. The ability to characterize the risk of latent health conditions is a significant component of this understanding. Provision of health screening services to active and former astronauts ensures individual, mission, and community health and safety. Currently, the NASA-Johnson Space Center (JSC) Flight Medicine Clinic (FMC) provides extensive medical monitoring to active astronauts throughout their careers. Upon retirement, astronauts may voluntarily return to the JSC FMC for an annual preventive exam. However, current retiree monitoring includes only selected screening tests, representing an opportunity for augmentation. The potential long-term health effects of spaceflight demand an expanded framework of testing for former astronauts. The need is two-fold: screening tests widely recommended for other aging populations are necessary to rule out conditions resulting from the natural aging process (e.g., colonoscopy, mammography); and expanded monitoring will increase NASA’s ability to better characterize conditions resulting from astronaut occupational exposures. To meet this need, NASA has begun an extensive exploration of the overall approach, cost, and policy implications of expanding the medical monitoring of former NASA astronauts under the Astronaut Occupational Health program. 
Increasing the breadth of monitoring services will ultimately enrich the existing evidence base of occupational health risks to astronauts. Such an expansion would therefore improve the understanding of the health of the astronaut population as a whole, and the ability to identify, mitigate, and manage such risks in preparation for deep space exploration missions.

Keywords: astronaut, long-term health, NASA, occupational health, surveillance

Procedia PDF Downloads 533
822 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model

Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin

Abstract:

Early detection of anomalies in data centers is important to reduce downtime and the costs of periodic maintenance. However, there is little research on this topic and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers that combines sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors, selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. Performance is assessed a posteriori through the F1-score by comparing detected anomalies with the data center’s history. The proposed model outperforms the state-of-the-art reconstruction method, which uses only one autoencoder taking multivariate sequences and detects an anomaly with a threshold on the reconstruction error, achieving an F1-score of 83.60% compared to 24.16%.
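The residual-based decision stage can be sketched as follows; for brevity, a fixed threshold on a single residual feature stands in for the paper’s feature extraction and random forest, and the residual signals are synthetic:

```python
import numpy as np

def detect(residuals, thresh):
    """Flag a window as anomalous when the mean absolute reconstruction
    residual (input minus autoencoder output) exceeds a threshold."""
    return np.array([np.mean(np.abs(r)) > thresh for r in residuals])

def f1_score(pred, truth):
    """F1-score of predicted anomaly flags against the labelled history."""
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

rng = np.random.default_rng(2)
normal = [rng.normal(0, 0.1, 50) for _ in range(8)]  # small residuals
faulty = [rng.normal(0, 1.0, 50) for _ in range(2)]  # large residuals
pred = detect(normal + faulty, thresh=0.3)
truth = np.array([False] * 8 + [True] * 2)
score = f1_score(pred, truth)
```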

Keywords: anomaly detection, autoencoder, data centers, deep learning

Procedia PDF Downloads 194