Search results for: whale optimization algorithm

664 Astronomical Object Classification

Authors: Alina Muradyan, Lina Babayan, Arsen Nanyan, Gohar Galstyan, Vigen Khachatryan

Abstract:

We present a photometric method for identifying stars, galaxies and quasars in multi-color surveys, which uses a library of roughly 65,000 color templates for comparison with observed objects. The method aims to extract the information content of object colors in a statistically correct way, and performs a classification as well as a redshift estimation for galaxies and quasars in a unified approach based on the same probability density functions. For the redshift estimation, we employ an advanced version of the Minimum Error Variance estimator which determines the redshift error from the redshift-dependent probability density function itself. The method was originally developed for the Calar Alto Deep Imaging Survey (CADIS), but is now used in a wide variety of survey projects. We checked its performance by spectroscopy of CADIS objects, where the method provides high reliability (6 errors among 151 objects with R < 24), especially for the quasar selection, and redshifts accurate within σz ≈ 0.03 for galaxies and σz ≈ 0.1 for quasars. For an optimization of future survey efforts, a few model surveys are compared, which are designed to use the same total amount of telescope time but different sets of broad-band and medium-band filters. Their performance is investigated by Monte Carlo simulations as well as by analytic evaluation in terms of classification and redshift estimation. If photon noise were the only error source, broad-band surveys and medium-band surveys should perform equally well, as long as they provide the same spectral coverage. In practice, medium-band surveys show superior performance due to their higher tolerance for calibration errors and cosmic variance. Finally, we discuss the relevance of color calibration and derive important conclusions for the issues of library design and choice of filters. The calibration accuracy poses strong constraints on an accurate classification, which are most critical for surveys with few, broad and deeply exposed filters, but less severe for surveys with many, narrow and less deep filters.
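
The core of such template-based classification can be illustrated with a short sketch. The following Python fragment is not the authors' implementation; the template library, labels, and numbers are hypothetical. It picks the best-fitting template for one object by chi-square minimisation over its colors and converts the fits into unnormalised template probabilities:

```python
import numpy as np

def classify_by_templates(obs_colors, obs_errors, template_colors, template_labels):
    """Pick the best-fitting template for one object by chi-square minimisation.

    obs_colors      : (n_bands,) observed colors
    obs_errors      : (n_bands,) photometric errors
    template_colors : (n_templates, n_bands) library of template colors
    template_labels : (n_templates,) e.g. 'star', 'galaxy z=0.3', 'quasar z=2.1'
    """
    chi2 = np.sum(((template_colors - obs_colors) / obs_errors) ** 2, axis=1)
    # Convert chi-square values into relative (unnormalised) likelihoods
    likelihood = np.exp(-0.5 * (chi2 - chi2.min()))
    best = int(np.argmin(chi2))
    return template_labels[best], likelihood / likelihood.sum()

# Toy usage with made-up colors and a three-template "library"
templates = np.array([[0.2, 0.5], [1.1, 0.9], [0.4, -0.1]])
labels = np.array(["star", "galaxy", "quasar"])
label, probs = classify_by_templates(np.array([1.0, 0.85]), np.array([0.1, 0.1]),
                                     templates, labels)
print(label, probs)
```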

Keywords: VO, ArVO, DFBS, FITS, image processing, data analysis

Procedia PDF Downloads 80
663 3D Geological Modeling and Engineering Geological Characterization of Shallow Subsurface Soil and Rock of Addis Ababa, Ethiopia

Authors: Biruk Wolde, Atalay Ayele, Yonatan Garkabo, Trufat Hailmariam, Zemenu Germewu

Abstract:

A comprehensive three-dimensional (3D) geological modeling and engineering geological characterization of shallow subsurface soils and rocks are essential for a wide range of geotechnical and seismological engineering applications, particularly in urban environments. The spatial distribution and geological variation of the shallow subsurface of Addis Ababa city have not been studied so far in terms of geological and geotechnical modeling. This study aims at the construction of a 3D geological model, as well as providing insight into the engineering geological characteristics of the shallow subsurface soil and rock of Addis Ababa city. The 3D geological model was constructed by using more than 1500 geotechnical boreholes, well-drilling data, and geological maps. A well-known geostatistical kriging 3D interpolation algorithm was applied to visualize the spatial distribution and geological variation of the shallow subsurface. Due to the complex nature of the geological formations and the vertical and lateral variation of the geological profiles, the horizons-solid command of the Groundwater Modelling System (GMS) graphical user interface software was selected. For the engineering geological characterization of typical soils and rocks, both index and engineering laboratory tests have been used. The geotechnical properties of soil and rocks vary from place to place due to the uneven nature of subsurface formations observed in the study areas. The constructed model ascertains the thickness, extent, and 3D distribution of the important geological units of the city. This study is the first comprehensive research work on 3D geological modeling and subsurface characterization of soils and rocks in Addis Ababa city, and the outcomes will be important for further future research on subsurface conditions in the city. Furthermore, these findings provide a reference for developing a geo-database for the city.
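
As an illustration of the kriging-style interpolation step (a sketch only; the study itself used the GMS software, not this code), ordinary kriging can be approximated with Gaussian-process regression in scikit-learn. The borehole coordinates and values below are hypothetical:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical borehole data: x, y, depth coordinates and an observed property
# (e.g. a numeric code for the logged soil/rock unit or a strength index).
rng = np.random.default_rng(0)
xyz = rng.uniform(0, 1000, size=(200, 3))            # sample locations (m)
values = np.sin(xyz[:, 0] / 200) + 0.001 * xyz[:, 2] + rng.normal(0, 0.05, 200)

# Ordinary kriging is closely related to Gaussian-process regression, so an
# RBF kernel plus a nugget term (WhiteKernel) serves as a stand-in here.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=200.0) + WhiteKernel(1e-2),
                              normalize_y=True)
gp.fit(xyz, values)

# Predict on a coarse 3D grid to visualize the spatial variation
grid = np.array(np.meshgrid(np.linspace(0, 1000, 20),
                            np.linspace(0, 1000, 20),
                            np.linspace(0, 50, 5))).reshape(3, -1).T
pred, std = gp.predict(grid, return_std=True)
print(pred.shape, std.mean())
```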

Keywords: 3d geological modeling, addis ababa, engineering geology, geostatistics, horizons-solid

Procedia PDF Downloads 99
662 Understanding the Semantic Network of Tourism Studies in Taiwan by Using Bibliometrics Analysis

Authors: Chun-Min Lin, Yuh-Jen Wu, Ching-Ting Chung

Abstract:

The formulation of tourism policies requires objective academic research and evidence as support, especially research from local academia. Taiwan is a small island, and its economic growth relies heavily on tourism revenue. The Taiwanese government has been devoted to promoting the tourism industry over the past few decades. Scientific research outcomes by Taiwanese scholars can help lay the foundations for drafting future tourism policy by the government. In this study, a total of 120 full journal articles published between 2008 and 2016 in the Journal of Tourism and Leisure Studies (JTSL) were examined to explore the scientific research trend of tourism studies in Taiwan. JTSL is one of the most important Taiwanese journals in the tourism discipline; it focuses on tourism-related issues and uses traditional Chinese as the study language. The method of co-word analysis from bibliometrics approaches was employed for semantic analysis in this study. When analyzing Chinese words and phrases, word segmentation analysis is a crucial step. It must be carried out initially and precisely in order to obtain meaningful words or word chunks for further frequency calculation. A word segmentation system based on an N-gram algorithm was developed in this study to conduct the semantic analysis, and the 100 groups of meaningful phrases with the highest recurrence rates were located. Subsequently, co-word analysis was employed for semantic classification. The results showed that the themes of tourism research in Taiwan in recent years cover the scope of tourism education, environmental protection, hotel management, information technology, and senior tourism. The results can give insight into the related issues and serve as a reference for tourism-related policy making and follow-up research.
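
A minimal sketch of the N-gram counting idea behind the word segmentation step is given below. The paper's actual segmentation system is not available; the toy corpus and parameters are hypothetical:

```python
from collections import Counter

def ngram_candidates(texts, n_range=(2, 4), top_k=100):
    """Count character n-grams across a corpus and return the most frequent ones.

    A crude stand-in for N-gram word segmentation: in Chinese text, high-frequency
    character chunks are candidate 'words' for the subsequent co-word analysis.
    """
    counts = Counter()
    for text in texts:
        chars = [c for c in text if not c.isspace()]
        for n in range(n_range[0], n_range[1] + 1):
            counts.update("".join(chars[i:i + n]) for i in range(len(chars) - n + 1))
    return counts.most_common(top_k)

# Toy usage on two invented abstract fragments
corpus = ["觀光教育與環境保護", "飯店管理與觀光教育研究"]
print(ngram_candidates(corpus, top_k=5))
```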

Keywords: bibliometrics, co-word analysis, word segmentation, tourism research, policy

Procedia PDF Downloads 229
661 Fuzzy Decision Making to the Construction Project Management: Glass Facade Selection

Authors: Katarina Rogulj, Ivana Racetin, Jelena Kilic

Abstract:

In this study, the fuzzy logic approach (FLA) was developed for construction project management (CPM) under uncertainty and duality. The focus was on decision making in selecting the type of glass facade for a residential-commercial building in the main design. The adoption of fuzzy sets is capable of reflecting construction managers’ reliability level over subjective judgments, and thus the robustness of the system can be achieved. An α-cut method was utilized for discretizing the fuzzy sets in the FLA. This method can communicate all uncertain information in the optimization process, taking into account the values of this information. Furthermore, the FLA provides in-depth analyses of diverse policy scenarios that are related to various levels of economic aspects when it comes to valid decision making for construction projects. The developed approach is applied to CPM to demonstrate its applicability. By analyzing the materials of glass facades, variants were defined. The development of the FLA for CPM included the relevant construction project stakeholders, who were involved in defining the criteria used to evaluate each variant. Using the fuzzy Decision-Making Trial and Evaluation Laboratory (DEMATEL) method, a comparison of the glass facade variants was conducted. In this way, a ranking of the variants according to their priority for inclusion in the main design is obtained. The concept was tested on a residential-commercial building in the city of Rijeka, Croatia. The newly developed methodology was then compared with the existing one. The aim of the research was to define an approach that will improve current judgments and decisions when it comes to the material selection of a building's facade, one of the most important architectural and engineering tasks in the main design. The advantage of the new methodology compared to the old one is that it includes the subjective side of the managers’ decisions as an inevitable factor in every decision. The proposed approach can help construction project managers to identify the desired type of glass facade according to their preference and practical conditions, as well as facilitate in-depth analyses of tradeoffs between economic efficiency and architectural design.
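
The crisp core of the DEMATEL ranking step can be sketched as follows. This is illustrative only; in the fuzzy variant used here, expert judgments would first be aggregated and defuzzified (e.g. via α-cuts) before this computation, and the influence scores below are invented:

```python
import numpy as np

def dematel(direct):
    """Crisp DEMATEL core: total-relation matrix plus prominence / relation indices.

    `direct` is the (defuzzified) direct-influence matrix aggregated from experts.
    """
    direct = np.asarray(direct, dtype=float)
    n = direct.shape[0]
    # Normalize by the largest row/column sum, then compute T = X (I - X)^-1
    norm = direct / max(direct.sum(axis=1).max(), direct.sum(axis=0).max())
    total = norm @ np.linalg.inv(np.eye(n) - norm)
    D, R = total.sum(axis=1), total.sum(axis=0)
    return total, D + R, D - R   # prominence (importance) and relation (cause/effect)

# Toy 3-criterion example with hypothetical 0-4 influence scores
A = [[0, 3, 2],
     [1, 0, 3],
     [2, 1, 0]]
T, prominence, relation = dematel(A)
print(np.round(prominence, 2), np.round(relation, 2))
```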

Keywords: construction projects management, DEMATEL, fuzzy logic approach, glass façade selection

Procedia PDF Downloads 137
660 FlameCens: Visualization of Expressive Deviations in Music Performance

Authors: Y. Trantafyllou, C. Alexandraki

Abstract:

Music interpretation refers to the way musicians shape their performance by deliberately deviating from composers’ intentions, which are commonly communicated via some form of music transcription, such as a music score. For transcribed and non-improvised music, musical expression is manifested by introducing subtle deviations in tempo, dynamics and articulation during the evolution of a performance. This paper presents an application, named FlameCens, which, given two recordings of the same piece of music, presumably performed by different musicians, allows visualising deviations in tempo and dynamics during playback. The application may also compare a certain performance to the music score of that piece (i.e. a MIDI file), which may be thought of as an expression-neutral representation of that piece, hence depicting the expressive cues employed by certain performers. FlameCens uses the Dynamic Time Warping algorithm to compare two audio sequences, based on CENS (Chroma Energy distribution Normalized Statistics) audio features. Expressive deviations are illustrated in a moving flame, which is generated by an animation of particles. The length of the flame is mapped to deviations in dynamics, while the slope of the flame is mapped to tempo deviations, so that a faster tempo tilts the slope to the right and a slower tempo tilts it to the left. A constant slope signifies no tempo deviation. The detected deviations in tempo and dynamics can additionally be recorded in a text file, which allows for offline investigation. Moreover, in the case of monophonic music, the color of the particles is used to convey the pitch of the notes during performance. FlameCens has been implemented in Python and is openly available via GitHub. The application has been experimentally validated for different music genres including classical, contemporary, jazz and popular music. These experiments revealed that FlameCens can be a valuable tool for music specialists (i.e. musicians or musicologists) to investigate the expressive performance strategies employed by different musicians, as well as for music audiences to enhance their listening experience.
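
The alignment step described above can be sketched with librosa, which provides both CENS chroma features and a DTW implementation. The file names and the tempo-deviation proxy below are illustrative assumptions, not the FlameCens code:

```python
import numpy as np
import librosa

# Two recordings of the same piece (hypothetical file names)
y_ref, sr = librosa.load("performance_A.wav")
y_perf, _ = librosa.load("performance_B.wav", sr=sr)

# CENS chroma features are robust to timbre and dynamics, which makes them
# well suited to aligning different performances of the same piece.
C_ref = librosa.feature.chroma_cens(y=y_ref, sr=sr)
C_perf = librosa.feature.chroma_cens(y=y_perf, sr=sr)

# Dynamic Time Warping on the CENS features yields the optimal alignment path.
D, wp = librosa.sequence.dtw(X=C_ref, Y=C_perf, metric="cosine")
wp = wp[::-1]  # (ref_frame, perf_frame) pairs ordered from start to end

# The local slope of the warping path indicates tempo deviation:
# slope > 1 means the second performance is locally slower than the reference.
ref_steps = np.maximum(np.diff(wp[:, 0]), 1e-9)
local_slope = np.diff(wp[:, 1]) / ref_steps
print("mean local slope:", local_slope.mean())
```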

Keywords: audio synchronization, computational music analysis, expressive music performance, information visualization

Procedia PDF Downloads 131
659 Lung Cancer Detection and Multi Level Classification Using Discrete Wavelet Transform Approach

Authors: V. Veeraprathap, G. S. Harish, G. Narendra Kumar

Abstract:

Uncontrolled growth of abnormal cells in the lung in the form of a tumor can be either benign (non-cancerous) or malignant (cancerous). Patients with Lung Cancer (LC) have an average life expectancy of five years, provided diagnosis, detection, and prediction are made early, which reduces the need for treatment options carrying the risk of invasive surgery and increases the survival rate. Computed Tomography (CT), Positron Emission Tomography (PET), and Magnetic Resonance Imaging (MRI) are commonly used for earlier detection of cancer. A Gaussian filter along with a median filter is used for smoothing and noise removal, and Histogram Equalization (HE) is used for image enhancement, which gives the best results. Lung cavities are extracted, and the background portion other than the two lung cavities is completely removed, with the right and left lungs segmented separately. Region property measurements of area, perimeter, diameter, centroid and eccentricity are taken for the segmented tumor image, while texture is characterized by Gray-Level Co-occurrence Matrix (GLCM) functions; feature extraction from the Region of Interest (ROI) provides the input to the classifier. Two levels of classification are employed: K-Nearest Neighbor (KNN) is used for determining the patient condition as normal or abnormal, while an Artificial Neural Network (ANN) is used for identifying the cancer stage. The Discrete Wavelet Transform (DWT) algorithm is used for the main feature extraction, leading to the best efficiency. The developed technology shows encouraging results for real-time information and online detection for future research.
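
The GLCM texture-feature step can be sketched with scikit-image. This is an illustrative fragment, not the authors' implementation; the ROI below is a random stand-in for a CT patch:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi):
    """Texture features of a segmented tumor ROI via the Gray-Level Co-occurrence Matrix.

    `roi` is a 2D uint8 image patch (the tumor region of interest).
    """
    glcm = graycomatrix(roi, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    # Average each property over the four directions
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

# Toy usage with a random patch standing in for a CT ROI
roi = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
print(glcm_features(roi))
```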

Keywords: artificial neural networks, ANN, discrete wavelet transform, DWT, gray-level co-occurrence matrix, GLCM, k-nearest neighbor, KNN, region of interest, ROI

Procedia PDF Downloads 153
658 Classifying Affective States in Virtual Reality Environments Using Physiological Signals

Authors: Apostolos Kalatzis, Ashish Teotia, Vishnunarayan Girishan Prabhu, Laura Stanley

Abstract:

Emotions are functional behaviors influenced by thoughts, stimuli, and other factors that induce neurophysiological changes in the human body. Understanding and classifying emotions are challenging, as individuals have varying perceptions of their environments. Therefore, it is crucial that there are publicly available databases and virtual reality (VR) based environments that have been scientifically validated for assessing emotional classification. This study utilized two commercially available VR applications (Guided Meditation VR™ and Richie’s Plank Experience™) to induce acute stress and a calm state among participants. Subjective and objective measures were collected to create a validated multimodal dataset and classification scheme for affective state classification. Participants’ subjective measures included the use of the Self-Assessment Manikin, emotional cards, and a 9-point Visual Analogue Scale for perceived stress, collected using a Virtual Reality Assessment Tool developed by our team. Participants’ objective measures included Electrocardiogram and Respiration data that were collected from 25 participants (15 M, 10 F, Mean = 22.28 ± 4.92). The features extracted from these data included heart rate variability components and respiration rate, both of which were used to train two machine learning models. Subjective responses validated the efficacy of the VR applications in eliciting the two desired affective states; for classifying the affective states, a logistic regression (LR) and a support vector machine (SVM) with a linear kernel algorithm were developed. The LR outperformed the SVM and achieved 93.8%, 96.2%, and 93.8% leave-one-subject-out cross-validation accuracy, precision and recall, respectively. The VR assessment tool and data collected in this study are publicly available for other researchers.
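
The leave-one-subject-out evaluation of the logistic regression classifier can be sketched with scikit-learn. This is illustrative only; the features, labels, and subject ids below are randomly generated stand-ins for the HRV and respiration features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: HRV components + respiration rate per window,
# with binary labels (0 = calm, 1 = stress) and a subject id per window.
rng = np.random.default_rng(1)
X = rng.normal(size=(250, 5))
y = rng.integers(0, 2, size=250)
subjects = rng.integers(0, 25, size=250)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
loso = LeaveOneGroupOut()   # hold out one subject's windows per fold
scores = cross_val_score(clf, X, y, groups=subjects, cv=loso, scoring="accuracy")
print("LOSO accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```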

Keywords: affective computing, biosignals, machine learning, stress database

Procedia PDF Downloads 142
657 Inverse Saturable Absorption in Non-linear Amplifying Loop Mirror Mode-Locked Fiber Laser

Authors: Haobin Zheng, Xiang Zhang, Yong Shen, Hongxin Zou

Abstract:

The research focuses on mode-locked fiber lasers with a non-linear amplifying loop mirror (NALM). Although these lasers have shown potential, they still have limitations in terms of low repetition rate. The self-starting of mode-locking in NALM is influenced by the cross-phase modulation (XPM) effect, which has not been thoroughly studied. The aim of this study is two-fold. First, to overcome the difficulties associated with increasing the repetition rate in mode-locked fiber lasers with NALM. Second, to analyze the influence of XPM on self-starting of mode-locking. The power distributions of two counterpropagating beams in the NALM and the differential non-linear phase shift (NPS) accumulations are calculated. The analysis is conducted from the perspective of NPS accumulation. The differential NPSs for continuous wave (CW) light and pulses in the fiber loop are compared to understand the inverse saturable absorption (ISA) mechanism during pulse formation in NALM. The study reveals a difference in differential NPSs between CW light and pulses in the fiber loop in NALM. This difference leads to an ISA mechanism, which has not been extensively studied in artificial saturable absorbers. The ISA in NALM provides an explanation for experimentally observed phenomena, such as active mode-locking initiation through tapping the fiber or fine-tuning light polarization. These findings have important implications for optimizing the design of NALM and reducing the self-starting threshold of high-repetition-rate mode-locked fiber lasers. This study contributes to the theoretical understanding of NALM mode-locked fiber lasers by exploring the ISA mechanism and its impact on self-starting of mode-locking. The research fills a gap in the existing knowledge regarding the XPM effect in NALM and its role in pulse formation. This study provides insights into the ISA mechanism in NALM mode-locked fiber lasers and its role in self-starting of mode-locking. The findings contribute to the optimization of NALM design and the reduction of the self-starting threshold, which are essential for achieving high-repetition-rate operation in fiber lasers. Further research in this area can lead to advancements in the field of mode-locked fiber lasers with NALM.

Keywords: inverse saturable absorption, NALM, mode-locking, non-linear phase shift

Procedia PDF Downloads 101
656 Bulk-Density and Lignocellulose Composition: Influence of Changing Lignocellulosic Composition on Bulk-Density during Anaerobic Digestion and Implication of Compacted Lignocellulose Bed on Mass Transfer

Authors: Aastha Paliwal, H. N. Chanakya, S. Dasappa

Abstract:

Lignocellulose, as an alternate feedstock for biogas production, has been an active area of research. However, lignocellulose poses many operational difficulties: widespread variation in the structural organization of the lignocellulosic matrix, amenability to degradation, and low bulk density, to name a few. Amongst these, the low bulk density of the lignocellulosic feedstock is crucial to process operation and optimization. Low bulk densities cause the feedstock to float in conventional liquid/wet digesters. Low bulk densities also restrict the maximum achievable organic loading rate (OLR) in the reactor, decreasing the power density of the reactor. However, during digestion, lignocellulose undergoes very high compaction (up to 26 times the feeding density). The low feeding density first reduces the achievable OLR, and the compaction during digestion then renders the reactor space underutilized and also imposes significant mass transfer limitations. The objective of this paper was to understand the effects of compacting lignocellulose on mass transfer and the influence of the loss of different components on the bulk density, and hence the structural integrity, of the digesting lignocellulosic feedstock. 10 different lignocellulosic feedstocks (monocots and dicots) were digested anaerobically in a fed-batch leach bed reactor, the solid-state stratified bed reactor (SSBR). Percolation rates of the recycled bio-digester liquid (BDL) were also measured during the reactor run period to understand the implication of compaction on mass transfer. After 95 days, in a destructive sampling, lignocellulosic feedstocks digested at different SRT were investigated to quantitate the weekly changes in bulk density and lignocellulosic composition. Further, the percolation rate data were also compared to the bulk density data. Results from the study indicate that the losses of hemicellulose (r²=0.76), hot water extractives (r²=0.68), and oxalate extractives (r²=0.64) had the dominant influence on changing the structural integrity of the studied lignocellulose during anaerobic digestion. Further, the feeding bulk density of the lignocellulose can be maintained between 300-400 kg/m³ to achieve a higher OLR, while a bulk density of 440-500 kg/m³ incurs significant mass transfer limitations for highly compacting beds of dicots.

Keywords: anaerobic digestion, bulk density, feed compaction, lignocellulose, lignocellulosic matrix, cellulose, hemicellulose, lignin, extractives, mass transfer

Procedia PDF Downloads 168
655 Backwash Optimization for Drinking Water Treatment Biological Filters

Authors: Sarra K. Ikhlef, Onita Basu

Abstract:

Natural organic matter (NOM) removal efficiency using drinking water treatment biological filters can be highly influenced by backwashing conditions. Backwashing has the ability to remove the accumulated biomass and particles in order to regenerate the biological filters' removal capacity and prevent excessive headloss buildup. A lab-scale system consisting of 3 biological filters was used in this study to examine the implications of different backwash strategies on biological filtration performance. The backwash procedures were evaluated based on their impacts on dissolved organic carbon (DOC) removals, biological filters’ biomass, backwash water volume usage, and particle removal. Results showed that under nutrient-limited conditions, the simultaneous use of air and water under collapse pulsing conditions led to a DOC removal of 22%, which was significantly higher (p>0.05) than the 12% removal observed under water-only backwash conditions. Employing a bed expansion of 20% under nutrient-supplemented conditions, compared to a 30% reference bed expansion while using the same amount of water volume, led to similar DOC removals. On the other hand, utilizing a higher bed expansion (40%) led to significantly lower DOC removals (23%). Also, a backwash strategy that reduced the backwash water volume usage by about 20% resulted in DOC removals similar to those observed with the reference backwash. The backwash procedures investigated in this study showed no consistent impact on biological filters' biomass concentrations as measured by the phospholipids and the adenosine tri-phosphate (ATP) methods. Moreover, neither of these two analyses showed a direct correlation with DOC removal. On the other hand, dissolved oxygen (DO) uptake showed a direct correlation with DOC removals. The addition of the extended terminal subfluidization wash (ETSW) demonstrated no apparent impact on DOC removals. ETSW also successfully eliminated the filter ripening sequence (FRS). As a result, the additional water usage resulting from implementing ETSW was compensated by water savings after restart. Results from this study provide insight to researchers and water treatment utilities on how to better optimize the backwashing procedure with the goal of optimizing the overall biological filtration process.

Keywords: biological filtration, backwashing, collapse pulsing, ETSW

Procedia PDF Downloads 274
654 Modelling of Solidification in a Latent Thermal Energy Storage with a Finned Tube Bundle Heat Exchanger Unit

Authors: Remo Waser, Simon Maranda, Anastasia Stamatiou, Ludger J. Fischer, Joerg Worlitschek

Abstract:

In latent heat storage, a phase change material (PCM) is used to store thermal energy. The heat transfer rate during solidification is limited and considered a key challenge in the development of latent heat storages. Thus, finned heat exchangers (HEX) are often utilized to increase the heat transfer rate of the storage system. In this study, a new modeling approach to calculating the heat transfer rate in latent thermal energy storages with complex HEX geometries is presented. This model allows for an optimization of the HEX design in terms of costs and thermal performance of the system. Modeling solidification processes requires the calculation of time-dependent heat conduction with moving boundaries. Commonly used computational fluid dynamics (CFD) methods enable the analysis of the heat transfer in complex HEX geometries. If applied to the entire storage, the drawback of this approach is the high computational effort due to the small time steps and fine computational grids required for accurate solutions. An alternative way to describe the process of solidification is the so-called temperature-based approach. In order to minimize the computational effort, a quasi-stationary assumption can be applied. This approach provides highly accurate predictions for tube heat exchangers. However, it shows unsatisfactory results for more complex geometries such as finned tube heat exchangers. The presented simulation model uses a temporal and spatial discretization of the heat exchanger tube. The spatial discretization is based on the smallest possible symmetric segment of the HEX. The heat flow in each segment is calculated using the finite volume method. Since the heat transfer fluid temperature can be derived using energy conservation equations, the boundary conditions at the inner tube wall are dynamically updated for each time step and segment. The model allows a prediction of the thermal performance of latent thermal energy storage systems using complex HEX geometries with considerably low computational effort.

Keywords: modelling of solidification, finned tube heat exchanger, latent thermal energy storage

Procedia PDF Downloads 268
653 Effect of Proteoliposome Concentration on Salt Rejection Rate of Polysulfone Membrane Prepared by Incorporation of Escherichia coli and Halomonas elongata Aquaporins

Authors: Aysenur Ozturk, Aysen Yildiz, Hilal Yilmaz, Pinar Ergenekon, Melek Ozkan

Abstract:

Water scarcity is one of the most important environmental problems of the world today. The desalination process is regarded as a promising solution to the drinking water problem of countries facing water shortages. Reverse osmosis membranes are widely used for desalination processes. Nanostructured biomimetic membrane production is one of the most challenging research subjects for improving the water filtration efficiency of membranes and for reducing the cost of desalination processes. There are several studies in the literature on the development of novel biomimetic nanofiltration membranes by incorporation of aquaporin Z molecules. Aquaporins are cell membrane proteins that allow the passage of water molecules and reject all other dissolved solutes. They are present in the cell membranes of most living organisms and provide high water passage capacity. In this study, GST (glutathione S-transferase) tagged E. coli aquaporin Z and H. elongata aquaporin proteins, which were previously cloned and characterized, were purified from E. coli BL21 cells and used for the fabrication of a modified Polysulphone Membrane (PS). Aquaporins were incorporated on the surface of the membrane by using 1,2-dioleoyl-sn-glycero-3-phosphocholine (DOPC) phospholipids as carrier liposomes. Aquaporin-containing proteoliposomes were immobilized on the surface of the membrane with an m-phenylene-diamine (MPD) and trimesoyl chloride (TMC) rejection layer. The water flux, salt rejection, and glucose rejection performances of the thin film composite membranes were tested by using a dead-end reactor cell. In this study, the effects of proteoliposome concentration and filtration pressure on the water flux and salt rejection rate of the membranes were investigated. The type of aquaporin used for membrane fabrication and the flux and pressure applied for filtration were found to be important parameters affecting rejection rates. Results suggested that optimization of the concentration of aquaporin carriers (proteoliposomes) on the membrane surface is necessary for the fabrication of effective composite membranes used for different purposes.

Keywords: aquaporins, biomimmetic membranes, desalination, water treatment

Procedia PDF Downloads 198
652 Hybridization of Manually Extracted and Convolutional Features for Classification of Chest X-Ray of COVID-19

Authors: M. Bilal Ishfaq, Adnan N. Qureshi

Abstract:

COVID-19 is the most infectious disease these days; it was first reported in Wuhan, the capital city of Hubei in China, and then it spread rapidly throughout the whole world. Later, on 11 March 2020, the World Health Organisation (WHO) declared it a pandemic. Since COVID-19 is highly contagious, it has affected approximately 219M people worldwide and caused 4.55M deaths. It has brought the importance of accurate diagnosis of respiratory diseases such as pneumonia and COVID-19 to the forefront. In this paper, we propose a hybrid approach for the automated detection of COVID-19 using medical imaging. We present the hybridization of manually extracted and convolutional features. Our approach combines Haralick texture features and convolutional features extracted from chest X-rays and CT scans. We also employ a minimum redundancy maximum relevance (MRMR) feature selection algorithm to reduce computational complexity and enhance classification performance. The proposed model is evaluated on four publicly available datasets, including Chest X-ray Pneumonia, COVID-19 Pneumonia, COVID-19 CTMaster, and VinBig data. The results demonstrate high accuracy and effectiveness, with 0.9925 on the Chest X-ray Pneumonia dataset, 0.9895 on the COVID-19, Pneumonia and Normal Chest X-ray dataset, 0.9806 on the Covid CTMaster dataset, and 0.9398 on the VinBig dataset. We further evaluate the effectiveness of the proposed model using ROC curves, where the AUC for the best-performing model reaches 0.96. Our proposed model provides a promising tool for the early detection and accurate diagnosis of COVID-19, which can assist healthcare professionals in making informed treatment decisions and improving patient outcomes. The results of the proposed model are quite plausible, and the system can be deployed in a clinical or research setting to assist in the diagnosis of COVID-19.

Keywords: COVID-19, feature engineering, artificial neural networks, radiology images

Procedia PDF Downloads 75
651 Modeling of Bipolar Charge Transport through Nanocomposite Films for Energy Storage

Authors: Meng H. Lean, Wei-Ping L. Chu

Abstract:

The effects of ferroelectric nanofiller size, shape, loading, and polarization on bipolar charge injection, transport, and recombination through amorphous and semicrystalline polymers are studied. A 3D particle-in-cell model extends the classical electrical double layer representation to treat ferroelectric nanoparticles. Metal-polymer charge injection assumes Schottky emission and Fowler-Nordheim tunneling, migration through field-dependent Poole-Frenkel mobility, and recombination with Monte Carlo selection based on collision probability. A boundary integral equation method is used for the solution of the Poisson equation, coupled with a second-order predictor-corrector scheme for robust time integration of the equations of motion. The stability criterion of the explicit algorithm conforms to the Courant-Friedrichs-Lewy limit. Trajectories for charges that make it through the film are curvilinear paths that meander through the interspaces. Results indicate that charge transport behavior depends on nanoparticle polarization, with anti-parallel orientation showing the highest leakage conduction and the lowest level of charge trapping in the interaction zone. The simulation prediction of a size range of 80 to 100 nm to minimize attachment and maximize conduction is validated by theory. Attached charge fractions go from 2.2% to 97% as nanofiller size is decreased from 150 nm to 60 nm. The computed conductivity of 0.4 x 10⁻¹⁴ S/cm is in agreement with published data for plastics. Charge attachment is increased with spheroids due to the increase in surface area, and especially so for oblate spheroids, showing the influence of larger cross-sections. Charge attachment to nanofillers and nanocrystallites increases with vol.% loading or degree of crystallinity, and saturates at about 40 vol.%.

Keywords: nanocomposites, nanofillers, electrical double layer, bipolar charge transport

Procedia PDF Downloads 354
650 Computational Study on Traumatic Brain Injury Using Magnetic Resonance Imaging-Based 3D Viscoelastic Model

Authors: Tanu Khanuja, Harikrishnan N. Unni

Abstract:

The head is the most vulnerable part of the human body, and injuries to it may be severe and life-threatening. As the in vivo brain response cannot be recorded during injury, computational investigation of a head model can be very helpful for understanding the injury mechanism. The majority of the physical damage to living tissues is caused by relative motion within the tissue due to tensile and shearing structural failures. The present Finite Element study focuses on investigating intracranial pressure and stress/strain distributions resulting from impact loads on various sites of the human head. This is performed by developing a 3D model of a human head with major segments such as the cerebrum, cerebellum, brain stem, CSF (cerebrospinal fluid), and skull from patient-specific MRI (magnetic resonance imaging). The semi-automatic segmentation of the head is performed using AMIRA software to extract the finer grooves of the brain. To maintain high accuracy, a large number of mesh elements is required, which leads to long computational times. Therefore, mesh optimization has also been performed using tetrahedral elements. In addition, model validation against the experimental literature is performed as well. Hard tissues such as the skull are modeled as elastic, whereas soft tissues such as the brain are modeled with a viscoelastic Prony series material model. This paper intends to obtain insights into the severity of brain injury by analyzing impacts on the frontal, top, back, and temporal sites of the head. Yield stress (based on the von Mises stress criterion for tissues) and intracranial pressure distributions due to impact on different sites (frontal, parietal, etc.) are compared, and the extent of damage to cerebral tissues is discussed in detail. This paper finds that a back impact is more injurious to the head overall than impacts at the other sites. The present work should help in understanding the injury mechanism of traumatic brain injury more effectively.

Keywords: dynamic impact analysis, finite element analysis, intracranial pressure, MRI, traumatic brain injury, von Mises stress

Procedia PDF Downloads 163
649 Influence of Hydrophobic Surface on Flow Past Square Cylinder

Authors: S. Ajith Kumar, Vaisakh S. Rajan

Abstract:

In external flows, vortex shedding behind bluff bodies causes a large number of engineering structures to experience unsteady loads, which can result in structural failure. Vortex shedding can even turn out to be disastrous, as in the Tacoma Bridge failure incident. Control over vortex shedding is therefore needed to avoid this untoward condition by reducing the unsteady forces acting on the bluff body. For circular cylinders, a hydrophobic surface in an otherwise no-slip surface is found to delay separation and drastically minimize the effects of vortex shedding. Flow over a square cylinder behaves differently, as separation can take place from either of the two corner separation points (front or rear). An attempt is made in this study to numerically elucidate the effect of a hydrophobic surface on flow over a square cylinder. A 2D numerical simulation has been carried out to understand the effects of the slip surface on the flow past a square cylinder. The details of the numerical algorithm will be presented at the time of the conference. A non-dimensional parameter, the Knudsen number, is defined to quantify the slip on the cylinder surface based on Maxwell’s equation. The slip condition of the wall affects the vorticity distribution around the cylinder and the flow separation. In the numerical analysis, we observed that the hydrophobic surface enhances the shedding frequency and damps down the amplitude of oscillations of the square cylinder. We also found that the slip reduces aerodynamic force coefficients such as the coefficient of lift (CL) and the coefficient of drag (CD), and hence replacing the no-slip surface with a hydrophobic surface can be treated as an effective drag reduction strategy. The introduction of a hydrophobic surface could also be utilized for reducing vortex-induced vibrations (VIV) and is found to be an effective method of controlling VIV, thereby controlling structural failures.

Keywords: drag reduction, flow past square cylinder, flow control, hydrophobic surfaces, vortex shedding

Procedia PDF Downloads 375
648 Improved Distance Estimation in Dynamic Environments through Multi-Sensor Fusion with Extended Kalman Filter

Authors: Iffat Ara Ebu, Fahmida Islam, Mohammad Abdus Shahid Rafi, Mahfuzur Rahman, Umar Iqbal, John Ball

Abstract:

The application of multi-sensor fusion for enhanced distance estimation accuracy in dynamic environments is crucial for advanced driver assistance systems (ADAS) and autonomous vehicles. Limitations of single sensors such as cameras or radar in adverse conditions motivate the use of combined camera and radar data to improve reliability, adaptability, and object recognition. A multi-sensor fusion approach using an extended Kalman filter (EKF) is proposed to combine sensor measurements with a dynamic system model, achieving robust and accurate distance estimation. The research utilizes the Mississippi State University Autonomous Vehicular Simulator (MAVS) to create a controlled environment for data collection. Data analysis is performed using MATLAB. Qualitative (visualization of fused data vs ground truth) and quantitative metrics (RMSE, MAE) are employed for performance assessment. Initial results with simulated data demonstrate accurate distance estimation compared to individual sensors. The optimal sensor measurement noise variance and plant noise variance parameters within the EKF are identified, and the algorithm is validated with real-world data from a Chevrolet Blazer. In summary, this research demonstrates that multi-sensor fusion with an EKF significantly improves distance estimation accuracy in dynamic environments. This is supported by comprehensive evaluation metrics, with validation transitioning from simulated to real-world data, paving the way for safer and more reliable autonomous vehicle control.
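
A minimal sketch of the fusion idea follows (not the authors' implementation). With a linear constant-velocity motion model and range-only measurements, the EKF reduces to a standard Kalman filter that sequentially fuses camera and radar range estimates; the time step, noise values, and measurements below are assumptions:

```python
import numpy as np

# State = [distance, relative velocity]; constant-velocity motion model.
dt = 0.1
F = np.array([[1, dt], [0, 1]])                  # state transition
Q = np.diag([0.05, 0.5])                         # plant (process) noise covariance
H = np.array([[1.0, 0.0]])                       # both sensors measure distance
R_cam, R_radar = np.array([[4.0]]), np.array([[0.5]])   # assumed measurement noise

x = np.array([[30.0], [0.0]])                    # initial distance (m), velocity (m/s)
P = np.eye(2) * 10.0

def predict(x, P):
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z, R):
    y = z - H @ x                                # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# One fusion step with hypothetical measurements from each sensor
x, P = predict(x, P)
x, P = update(x, P, np.array([[29.2]]), R_cam)     # camera range estimate
x, P = update(x, P, np.array([[29.6]]), R_radar)   # radar range measurement
print("fused distance: %.2f m" % x[0, 0])
```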

Keywords: sensor fusion, EKF, MATLAB, MAVS, autonomous vehicle, ADAS

Procedia PDF Downloads 45
647 Elaboration of Ceramic Metal Accident Tolerant Fuels by Additive Manufacturing

Authors: O. Fiquet, P. Lemarignier

Abstract:

Additive manufacturing may find numerous applications in the nuclear industry, for the same reasons as in other industries: to enlarge design possibilities and performance and to develop fabrication methods as a flexible route for future innovation. Additive Manufacturing applications in the design of structural metallic components for reactors are already developed at a high Technology Readiness Level (TRL). In the case of a Pressurized Water Reactor using uranium oxide fuel pellets, which are ceramics, the transposition of already optimized Additive Manufacturing (AM) processes to UO₂ remains a challenge, and progress remains slow because, to the best of our knowledge, only a few laboratories have the capability to develop processes applicable to UO₂. After the Fukushima accident, numerous research fields emerged with the study of ATF (Accident Tolerant Fuel) concepts, which aim to improve fuel behaviour. One item concerns increasing the pellet thermal performance by, for example, adding a high-thermal-conductivity material to the fissile UO₂. This additive phase may be metallic, and the end product will constitute a CERMET composite. Innovative designs of an internal metallic framework are proposed based on predictive calculations. However, because the well-known reference pellet manufacturing methods impose many limitations, manufacturing such a composite remains an arduous task. Therefore, the AM process appears as a means of broadening the design possibilities of CERMET manufacturing. While the external form remains a standard cylindrical fuel pellet, the internal metallic design remains to be optimized based on process capabilities. This project also considers the limitation to a maximum of 10% volume of metal, which is a constraint imposed by neutron physics considerations. The AM technique chosen for this development is robocasting because of its simplicity and low-cost equipment. It remains, however, a challenge to adapt a ceramic 3D printing process to the fabrication of UO₂ fuel. The investigation starts with a surrogate material, and the optimization of the slurry feedstock is based on alumina. The paper will present the first printing of Al2O3-Mo CERMET and the expected transition from ceramic-based alumina to UO₂ CERMET.

Keywords: nuclear, fuel, CERMET, robocasting

Procedia PDF Downloads 68
646 Hedgerow Detection and Characterization Using Very High Spatial Resolution SAR DATA

Authors: Saeid Gharechelou, Stuart Green, Fiona Cawkwell

Abstract:

Hedgerows play an important role in a wide range of ecological habitats, landscape and agricultural management, carbon sequestration, and wood production. Detecting hedgerows accurately from satellite imagery is a challenging remote sensing problem because, from a spatial viewpoint, a hedge is very similar to a linear object such as a road, and from a spectral viewpoint, a hedge is very similar to a forest. Remote sensors with very high spatial resolution (VHR) have recently enabled the automatic detection of hedges through the acquisition of images with sufficient spectral and spatial resolution. Indeed, VHR remote sensing data have recently provided the opportunity to detect hedgerows as line features, but difficulties remain in monitoring their characterization at the landscape scale. This research uses TerraSAR-X Spotlight and Staring mode data with 3-5 m resolution, acquired in the wet and dry seasons of 2014-2015, over the test site of Fermoy, Ireland, to detect hedgerows. Both channels of the dual-polarization (HH/VV) Spotlight data are used for hedgerow detection. Various SAR image processing techniques, applied in a trial-and-error manner and integrating classification algorithms such as texture analysis, support vector machine, k-means, and random forest, are used to detect hedgerows and characterize them. Shannon entropy (ShE) and backscattering analysis of single and double bounce in a polarimetric analysis are applied for the object-oriented classification and, finally, for extracting the hedgerow network. The work is still in progress, and other methods need to be applied to find the best approach for the study area. This research is ongoing; here we present preliminary work showing that polarimetric TSX imagery can potentially detect hedgerows.

Keywords: TerraSAR-X, hedgerow detection, high resolution SAR image, dual polarization, polarimetric analysis

Procedia PDF Downloads 230
645 Safeguarding the Construction Industry: Interrogating and Mitigating Emerging Risks from AI in Construction

Authors: Abdelrhman Elagez, Rolla Monib

Abstract:

This empirical study investigates the observed risks associated with adopting Artificial Intelligence (AI) technologies in the construction industry and proposes potential mitigation strategies. While AI has transformed several industries, the construction industry is slowly adopting advanced technologies like AI, introducing new risks that lack critical analysis in the current literature. A comprehensive literature review identified a research gap, highlighting the lack of critical analysis of risks and the need for a framework to measure and mitigate the risks of AI implementation in the construction industry. Consequently, an online survey was conducted with 24 project managers and construction professionals, possessing experience ranging from 1 to 30 years (with an average of 6.38 years), to gather industry perspectives and concerns relating to AI integration. The survey results yielded several significant findings. Firstly, respondents exhibited a moderate level of familiarity (66.67%) with AI technologies, while the industry's readiness for AI deployment and current usage rates remained low at 2.72 out of 5. Secondly, the top-ranked barriers to AI adoption were identified as lack of awareness, insufficient knowledge and skills, data quality concerns, high implementation costs, absence of prior case studies, and the uncertainty of outcomes. Thirdly, the most significant risks associated with AI use in construction were perceived to be a lack of human control (decision-making), accountability, algorithm bias, data security/privacy, and lack of legislation and regulations. Additionally, the participants acknowledged the value of factors such as education, training, organizational support, and communication in facilitating AI integration within the industry. These findings emphasize the necessity for tailored risk assessment frameworks, guidelines, and governance principles to address the identified risks and promote the responsible adoption of AI technologies in the construction sector.

Keywords: risk management, construction, artificial intelligence, technology

Procedia PDF Downloads 99
644 Tool for Maxillary Sinus Quantification in Computed Tomography Exams

Authors: Guilherme Giacomini, Ana Luiza Menegatti Pavan, Allan Felipe Fattori Alves, Marcela de Oliveira, Fernando Antonio Bacchim Neto, José Ricardo de Arruda Miranda, Seizo Yamashita, Diana Rodrigues de Pina

Abstract:

The maxillary sinus (MS), part of the paranasal sinus complex, is one of the most enigmatic structures in modern humans. The literature has suggested that MSs function as olfaction accessories, to heat or humidify inspired air, for thermoregulation, to impart resonance to the voice and others. Thus, the real function of the MS is still uncertain. Furthermore, the MS anatomy is complex and varies from person to person. Many diseases may affect the development process of sinuses. The incidence of rhinosinusitis and other pathoses in the MS is comparatively high, so, volume analysis has clinical value. Providing volume values for MS could be helpful in evaluating the presence of any abnormality and could be used for treatment planning and evaluation of the outcome. The computed tomography (CT) has allowed a more exact assessment of this structure, which enables a quantitative analysis. However, this is not always possible in the clinical routine, and if possible, it involves much effort and/or time. Therefore, it is necessary to have a convenient, robust, and practical tool correlated with the MS volume, allowing clinical applicability. Nowadays, the available methods for MS segmentation are manual or semi-automatic. Additionally, manual methods present inter and intraindividual variability. Thus, the aim of this study was to develop an automatic tool to quantify the MS volume in CT scans of paranasal sinuses. This study was developed with ethical approval from the authors’ institutions and national review panels. The research involved 30 retrospective exams of University Hospital, Botucatu Medical School, São Paulo State University, Brazil. The tool for automatic MS quantification, developed in Matlab®, uses a hybrid method, combining different image processing techniques. For MS detection, the algorithm uses a Support Vector Machine (SVM), by features such as pixel value, spatial distribution, shape and others. The detected pixels are used as seed point for a region growing (RG) segmentation. Then, morphological operators are applied to reduce false-positive pixels, improving the segmentation accuracy. These steps are applied in all slices of CT exam, obtaining the MS volume. To evaluate the accuracy of the developed tool, the automatic method was compared with manual segmentation realized by an experienced radiologist. For comparison, we used Bland-Altman statistics, linear regression, and Jaccard similarity coefficient. From the statistical analyses for the comparison between both methods, the linear regression showed a strong association and low dispersion between variables. The Bland–Altman analyses showed no significant differences between the analyzed methods. The Jaccard similarity coefficient was > 0.90 in all exams. In conclusion, the developed tool to quantify MS volume proved to be robust, fast, and efficient, when compared with manual segmentation. Furthermore, it avoids the intra and inter-observer variations caused by manual and semi-automatic methods. As future work, the tool will be applied in clinical practice. Thus, it may be useful in the diagnosis and treatment determination of MS diseases.
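
The region-growing step of such a pipeline can be sketched with scikit-image. This is illustrative only; in the described tool the seed would come from the SVM detector, and the tolerance, structuring element, and voxel size below are hypothetical:

```python
import numpy as np
from skimage.segmentation import flood
from skimage.morphology import binary_opening, disk

def grow_sinus_region(ct_slice, seed, tolerance=120):
    """Region growing from a detected seed pixel on one CT slice.

    `ct_slice` is a 2D array of HU values, `seed` a (row, col) tuple.
    """
    mask = flood(ct_slice, seed, tolerance=tolerance)  # grow while |HU - HU_seed| <= tol
    mask = binary_opening(mask, disk(2))               # suppress false-positive pixels
    return mask

# Toy slice: an air-filled cavity (about -1000 HU) surrounded by bone/soft tissue
ct = np.full((128, 128), 300.0)
ct[40:90, 50:100] = -1000.0
mask = grow_sinus_region(ct, seed=(64, 75))
voxel_volume_mm3 = 0.5 * 0.5 * 1.0                     # hypothetical voxel size
print("slice volume contribution:", mask.sum() * voxel_volume_mm3, "mm^3")
```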

Keywords: maxillary sinus, support vector machine, region growing, volume quantification

Procedia PDF Downloads 504
643 Application of a Model-Free Artificial Neural Networks Approach for Structural Health Monitoring of the Old Lidingö Bridge

Authors: Ana Neves, John Leander, Ignacio Gonzalez, Raid Karoumi

Abstract:

Systematic monitoring and inspection are needed to assess the present state of a structure and predict its future condition. If an irregularity is noticed, repair actions may take place, and the adequate intervention will most probably reduce future maintenance costs, minimize downtime and increase safety by avoiding the failure of the structure as a whole or of one of its structural parts. For this to be possible, decisions must be made at the right time, which implies using systems that can detect abnormalities in their early stage. In this sense, Structural Health Monitoring (SHM) is seen as an effective tool for improving the safety and reliability of infrastructures. This paper explores the decision-making problem in SHM regarding the maintenance of civil engineering structures. The aim is to assess the present condition of a bridge based exclusively on measurements using the method suggested in this paper, such that action is taken coherently with the information made available by the monitoring system. Artificial Neural Networks are trained, and their ability to predict structural behavior is evaluated in the light of a case study where acceleration measurements are acquired from a bridge located in Stockholm, Sweden. This relatively old bridge is presently still in operation despite experiencing obvious problems already reported in previous inspections. The prediction errors provide a measure of the accuracy of the algorithm and are subjected to further investigation, which comprises concepts like clustering analysis and statistical hypothesis testing. These enable interpretation of the obtained prediction errors, drawing conclusions about the state of the structure and thus supporting decision making regarding its maintenance.
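
A minimal sketch of the model-free, prediction-error idea follows (not the authors' implementation; the signals below are synthetic stand-ins for the bridge acceleration records):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_ar_dataset(signal, lag=20):
    """Autoregressive dataset: predict the next acceleration sample from `lag` past ones."""
    X = np.array([signal[i:i + lag] for i in range(len(signal) - lag)])
    y = signal[lag:]
    return X, y

# Synthetic acceleration records; the "current" record has slightly shifted behaviour.
rng = np.random.default_rng(0)
t = np.linspace(0, 60, 6000)
reference = np.sin(2 * np.pi * 2.0 * t) + 0.05 * rng.normal(size=t.size)
current = np.sin(2 * np.pi * 1.9 * t) + 0.05 * rng.normal(size=t.size)

X_train, y_train = make_ar_dataset(reference)
X_test, y_test = make_ar_dataset(current)

ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
ann.fit(X_train, y_train)

# Large prediction errors on new data flag a change from the trained (reference) state;
# further analysis (clustering, hypothesis testing) would operate on these errors.
errors = y_test - ann.predict(X_test)
print("RMS prediction error:", np.sqrt(np.mean(errors ** 2)))
```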

Keywords: artificial neural networks, clustering analysis, model-free damage detection, statistical hypothesis testing, structural health monitoring

Procedia PDF Downloads 209
642 Short Text Classification Using Part of Speech Feature to Analyze Students' Feedback of Assessment Components

Authors: Zainab Mutlaq Ibrahim, Mohamed Bader-El-Den, Mihaela Cocea

Abstract:

Students' textual feedback can hold unique patterns and useful information about the learning process; it can hold information about advantages and disadvantages of teaching methods, assessment components, facilities, and other aspects of teaching. The results of analysing such feedback can form a key point for institutions’ decision makers to advance and update their systems accordingly. This paper proposes a data mining framework for analysing end-of-unit general textual feedback using the part of speech (PoS) feature with four machine learning algorithms: support vector machines, decision tree, random forest, and naive Bayes. The proposed framework has two tasks: first, to use the above algorithms to build an optimal model that automatically classifies the whole data set into two subsets, one subset tailored to assessment practices (assessment related) and the other containing the non-assessment related data. The second task is to use the same algorithms to build an optimal model for the whole data set and the new data subsets to automatically detect their sentiment. The significance of this paper is to compare the performance of the above four algorithms using the part of speech feature to the performance of the same algorithms using the n-gram feature. The paper follows the Knowledge Discovery and Data Mining (KDDM) framework to construct the classification and sentiment analysis models, which comprises understanding the assessment domain, cleaning and pre-processing the data set, selecting and running the data mining algorithm, interpreting mined patterns, and consolidating the discovered knowledge. The results of the experiments show that the models using both feature types performed very well on the first task. However, on the second task, the models that used the part of speech feature underperformed in comparison with the models that used the unigram and bigram features.
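
The part-of-speech feature construction can be sketched as follows. This is illustrative only: the feedback snippets and labels are invented, and the NLTK resource names may vary with the NLTK version:

```python
import nltk
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Resource names depend on the installed NLTK version.
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def pos_sequence(text):
    """Replace each token by its part-of-speech tag, e.g. 'great labs' -> 'JJ NNS'."""
    tags = nltk.pos_tag(nltk.word_tokenize(text))
    return " ".join(tag for _, tag in tags)

# Toy feedback snippets with hypothetical labels (1 = assessment related)
texts = ["The exam questions were too long",
         "The library opening hours are great",
         "Coursework deadlines clashed with other units",
         "Lecture rooms were cold"]
labels = [1, 0, 1, 0]

pos_texts = [pos_sequence(t) for t in texts]
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(pos_texts, labels)
print(model.predict([pos_sequence("The test was far too difficult")]))
```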

Keywords: assessment, part of speech, sentiment analysis, student feedback

Procedia PDF Downloads 142
641 A Good Start for Digital Transformation of the Companies: A Literature and Experience-Based Predefined Roadmap

Authors: Batuhan Kocaoglu

Abstract:

Nowadays, digital transformation is a hot topic in both service and production businesses. Companies that want to stay alive in the coming years should change how they do their business. Industry leaders have started to extend their ERP (Enterprise Resource Planning)-like backbone technologies with digital advances such as analytics, mobility, sensor-embedded smart devices, AI (Artificial Intelligence) and more. Selecting the appropriate technology for the related business problem is also a hot topic. Besides this, to operate in the modern environment and fulfill rapidly changing customer expectations, a digital transformation of the business is required, changing the way the business runs and affecting how it does its business. Even though the term digital transformation is trendy, the literature is limited and covers just the philosophy instead of a solid implementation plan. Current studies urge firms to start their digital transformation, but few tell us how to do so. The huge investments, blurred definitions and vague concepts scare companies. The aim of this paper is to solidify the steps of digital transformation and offer a roadmap for companies and academicians. The proposed roadmap is developed based upon insights from the literature review, semi-structured interviews, and expert views to explore and identify crucial steps. We introduce our roadmap in the form of 8 main steps: Awareness; Planning; Operations; Implementation; Go-live; Optimization; Autonomation; Business Transformation; including a total of 11 sub-steps with examples. This study also emphasizes four main dimensions of digital transformation: Readiness assessment; Building organizational infrastructure; Building technical infrastructure; Maturity assessment. Finally, the roadmap relates the steps to three main terms used in the digital transformation literature: Digitization, Digitalization, and Digital Transformation. The resulting model shows that 'business process' and 'organizational issues' should be resolved before technology decisions and 'digitization'. Companies can start their journey with these solid steps, using the proposed roadmap to increase the success of their project implementation. Our roadmap is also adaptable for relevant Industry 4.0 and enterprise application projects. This roadmap will be useful for companies to persuade their top management to invest. Our results can be used as a baseline for further research related to readiness assessment and maturity assessment studies.

Keywords: digital transformation, digital business, ERP, roadmap

Procedia PDF Downloads 170
640 Unsupervised Classification of DNA Barcodes Species Using Multi-Library Wavelet Networks

Authors: Abdesselem Dakhli, Wajdi Bellil, Chokri Ben Amar

Abstract:

A DNA barcode is a short mitochondrial DNA fragment made up of nucleotides, each consisting of a phosphate group, a sugar, and a nucleic base (A, T, C, or G). Barcodes provide a good source of the information needed to classify living species, an intuition confirmed by many experimental results. Species classification with DNA barcode sequences has been studied by several researchers. The classification problem assigns unknown species to known ones by analysing their barcodes, a task that has to be supported by reliable methods and algorithms. To analyse species regions or entire genomes, it becomes necessary to use sequence similarity methods. A large set of sequences can be compared simultaneously using Multiple Sequence Alignment, which is known to be NP-complete. To make this type of analysis feasible, heuristics such as progressive alignment have been developed. Another tool for similarity search against a database of sequences is BLAST, which outputs shorter regions of high similarity between a query sequence and matched sequences in the database. However, all these methods are still computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable; this approach avoids the complex problem of form and structure in different classes of organisms. The method is evaluated on empirical data, and its classification performance is compared with other methods. Our system consists of three phases. The first, transformation, is composed of three steps: Electron-Ion Interaction Pseudopotential (EIIP) codification of the DNA barcodes, Fourier transform, and power spectrum signal processing. The second, approximation, is empowered by the use of Multi-Library Wavelet Neural Networks (MLWNN). The third is the classification of the DNA barcodes, realized by applying a hierarchical classification algorithm.
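
As a concrete illustration of the transformation phase, the sketch below (not the authors' code) maps a toy barcode fragment to a numerical signal using EIIP values commonly reported in the literature and computes its power spectrum with the FFT; the subsequent approximation phase would feed such spectra to the wavelet network.

```python
# Sketch of the transformation phase: EIIP codification, Fourier transform,
# power spectrum. The barcode fragment below is hypothetical.
import numpy as np

# Electron-Ion Interaction Pseudopotential values per nucleotide (literature values)
EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

def barcode_power_spectrum(sequence):
    """Map a barcode string to its EIIP signal and return the power spectrum."""
    signal = np.array([EIIP[base] for base in sequence.upper() if base in EIIP])
    signal = signal - signal.mean()          # remove the DC component
    spectrum = np.fft.rfft(signal)           # Fourier transform of the EIIP signal
    return np.abs(spectrum) ** 2             # power spectrum used as the feature vector

# Toy barcode fragment (hypothetical); real COI barcodes are ~650 bp long
features = barcode_power_spectrum("ATGCGTACGTTAGCCATTAGGC")
print(features[:5])
```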

Keywords: DNA barcode, electron-ion interaction pseudopotential, Multi Library Wavelet Neural Networks (MLWNN)

Procedia PDF Downloads 318
639 Structure-Guided Optimization of Sulphonamide as Gamma–Secretase Inhibitors for the Treatment of Alzheimer’s Disease

Authors: Vaishali Patil, Neeraj Masand

Abstract:

In older people, Alzheimer's disease (AD) is turning out to be a lethal disease. According to the amyloid hypothesis, aggregation of the amyloid β-protein (Aβ), particularly its 42-residue variant (Aβ42), plays a direct role in the pathogenesis of AD. Aβ is generated through sequential cleavage of the amyloid precursor protein (APP) by β-secretase (BACE) and γ-secretase (GS). Thus, in the treatment of AD, γ-secretase modulators (GSMs) are potentially disease-modifying, as they selectively lower pathogenic Aβ42 levels by shifting the enzyme cleavage sites without inhibiting γ-secretase activity. This possibly avoids the known adverse effects observed with complete inhibition of the enzyme complex. Virtual screening, via a drug-like ADMET filter, QSAR, and molecular docking analyses, has been utilized to identify novel γ-secretase modulators with a sulphonamide nucleus. Based on the QSAR analyses and docking scores, some novel analogs have been synthesized. The results obtained from the in silico studies have been validated by in vivo analysis. In the first step, behavioural assessment was carried out using the scopolamine-induced amnesia methodology. The same series was then evaluated for neuroprotective potential against the oxidative stress induced by scopolamine. Biochemical estimation was performed to evaluate changes in biochemical markers of Alzheimer's disease such as lipid peroxidation (LPO), glutathione reductase (GSH), and catalase. The scopolamine-induced amnesia model showed increased acetylcholinesterase (AChE) levels, and the inhibitory effect of the test compounds on brain AChE levels was evaluated. In all the studies, donepezil (dose: 50 µg/kg) was used as the reference drug. Reduced AChE activity was shown by compounds 3f, 3c, and 3e. In the later stage, the most potent compounds were evaluated for their Aβ42 inhibitory profile. It can be hypothesized that this series of alkyl-aryl sulphonamides exhibits anti-AD activity by inhibition of the acetylcholinesterase (AChE) enzyme as well as inhibition of plaque formation on prolonged dosage, along with neuroprotection from oxidative stress.
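
To illustrate the kind of QSAR step mentioned in the virtual screening workflow, the sketch below fits a simple multiple linear regression relating molecular descriptors to activity; the descriptor names, values, and activities are entirely hypothetical and this is not the authors' validated model.

```python
# Purely illustrative QSAR sketch: linear regression of hypothetical descriptors
# of sulphonamide analogs against invented pIC50 values. Real QSAR work would
# use validated descriptors and proper cross-validation.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: logP, molecular weight, topological polar surface area (hypothetical)
descriptors = np.array([
    [1.2, 310.4, 86.2],
    [2.1, 338.5, 78.9],
    [0.8, 296.3, 95.4],
    [2.7, 352.5, 72.1],
    [1.6, 324.4, 82.6],
])
pIC50 = np.array([5.8, 6.4, 5.2, 6.9, 6.1])  # invented activities

model = LinearRegression().fit(descriptors, pIC50)
print("coefficients:", model.coef_, "R^2:", model.score(descriptors, pIC50))

# A new (hypothetical) analog can then be ranked before synthesis
print("predicted pIC50:", model.predict([[1.9, 330.0, 80.0]]))
```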

Keywords: gamma-secretase inhibitors, Alzheimer's disease, sulphonamides, QSAR

Procedia PDF Downloads 255
638 Control Algorithm Design of Single-Phase Inverter For ZnO Breakdown Characteristics Tests

Authors: Kashif Habib, Zeeshan Ayyub

Abstract:

ZnO voltage-dependent resistors are widely used as components of electrical systems for over-voltage protection, with wide application prospects in superconducting energy removal, generator de-excitation, and overvoltage protection of electrical and electronic equipment. At present, research on the application of ZnO voltage-dependent resistors has largely stopped at their nonlinear voltage-current characteristic and overvoltage protection areas. There is no further study of the over-voltage breakdown characteristics, such as the combustion phenomena, the voltage and current measured at breakdown, and the effect on surrounding equipment; this is also a blind spot in their application. Therefore, when performing characteristic tests of ZnO voltage-dependent resistors, a reasonable test power supply is needed that keeps the terminal voltage sinusoidal and simulates the real power-frequency (PF) voltage supply conditions. We put forward the solution of using an inverter to generate a controllable power supply. This paper mainly focuses on the breakdown characteristic test power supply for the nonlinear ZnO voltage-dependent resistor. Based on current, mature switching power supply technology, we propose a power control system with the inverter as its core. The supply mainly realizes a sinusoidal voltage output from a three-phase PF-AC input, with three control modes (RMS, peak, average) for the current output. We choose the TMS320F2812M as the control part of the hardware platform. It is used to convert the power from three-phase to a controlled single-phase sinusoidal voltage through a rectifier, filter, and inverter. The controller is designed to produce SPWM and obtain the controlled voltage source via an appropriate multi-loop control strategy, while also executing data acquisition and display, system protection, start-up logic control, etc. The TMS320F2812M is able to complete the multi-loop control quickly and can accomplish the inverter output control well.
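
The sketch below illustrates, under simplified assumptions, the SPWM principle the controller relies on: a sinusoidal reference is compared against a high-frequency triangular carrier to generate the inverter switching signal. The carrier frequency and modulation index are illustrative only; this is not the TMS320F2812M firmware.

```python
# Sketch of bipolar sinusoidal PWM (SPWM) generation for a single-phase inverter.
import numpy as np

f_ref = 50.0         # reference (output) frequency, Hz
f_carrier = 5000.0   # triangular carrier frequency, Hz (assumed)
m = 0.8              # modulation index (assumed)
t = np.arange(0, 0.02, 1e-6)  # one fundamental period, 1 us resolution

reference = m * np.sin(2 * np.pi * f_ref * t)
# Triangular carrier between -1 and +1
carrier = 2.0 * np.abs(2.0 * (t * f_carrier - np.floor(t * f_carrier + 0.5))) - 1.0

# Bipolar SPWM: switch state is high whenever the reference exceeds the carrier
gate = (reference > carrier).astype(int)

# The duty cycle averaged over each carrier period tracks the sine reference,
# which is what keeps the filtered inverter output sinusoidal.
print("fraction of time high over one period:", gate.mean())
```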

Keywords: ZnO, multi-loop control, SPWM, non-linear load

Procedia PDF Downloads 325
637 Development of a Coupled Thermal-Mechanical-Biological Model to Simulate Impacts of Temperature on Waste Stabilization at a Landfill in Quebec, Canada

Authors: Simran Kaur, Paul J. Van Geel

Abstract:

A coupled Thermal-Mechanical-Biological (TMB) model was developed to analyse the impacts of temperature on waste stabilization at a Municipal Solid Waste (MSW) landfill in Quebec, Canada, using COMSOL Multiphysics, a finite element-based software. For waste placed in landfills in northern climates during winter months, it can take months or even years before the waste approaches ideal temperatures for biodegradation to occur. Therefore, the proposed model links biodegradation-induced strain in MSW to waste temperatures and the corresponding heat generation rates resulting from anaerobic degradation. This provides a link between the thermal-biological and mechanical behaviour of MSW. The thermal properties of MSW are further linked to density, which is tracked and updated in the mechanical component of the model, providing a mechanical-thermal link. The settlement of MSW is modelled based on the concept of viscoelasticity. The specific viscoelastic model used is a single Kelvin-Voigt viscoelastic body in which the finite element response is controlled by the elastic material parameters, Young's modulus and Poisson's ratio. The numerical model was validated with 10 years of temperature and settlement data collected from a landfill in Ste. Sophie, Quebec. The coupled TMB modelling framework, which simulates the progressive placement of waste lifts in the landfill, allows optimization of several thermal and mechanical parameters throughout the depth of the waste profile and helps in better understanding the temperature dependence of MSW stabilization. The model is able to illustrate how waste placed in the winter months can delay biodegradation-induced settlement and the generation of landfill gas. A delay in waste stabilization will impact the utilization of the approved airspace prior to the placement of a final cover and will impact post-closure maintenance. The model provides a valuable tool to assess different waste placement strategies in order to increase airspace utilization within landfills operating under different climates, in addition to understanding conditions for increased gas generation for recovery as a green and renewable energy source.
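
As an illustration of the viscoelastic idealization mentioned above, the sketch below computes the creep response of a single Kelvin-Voigt element under a constant overburden stress; the parameter values are hypothetical and this is not the coupled COMSOL model.

```python
# Sketch of Kelvin-Voigt creep: stress = E*strain + eta*d(strain)/dt, so under a
# constant stress the strain approaches sigma/E with retardation time tau = eta/E.
import numpy as np

E = 5.0e5      # elastic modulus of the waste, Pa (hypothetical)
eta = 5.0e12   # viscosity, Pa.s (hypothetical)
sigma = 1.0e5  # constant overburden stress from overlying lifts, Pa (hypothetical)

t = np.linspace(0, 10 * 365 * 24 * 3600, 500)    # 10 years, in seconds
tau = eta / E                                    # retardation time of the element
strain = (sigma / E) * (1.0 - np.exp(-t / tau))  # Kelvin-Voigt creep strain

# Settlement of a lift of initial thickness h0 follows directly from the strain
h0 = 2.0  # m, hypothetical lift thickness
settlement = strain * h0
print("settlement after 10 years: %.3f m" % settlement[-1])
```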

Keywords: coupled model, finite element modeling, landfill, municipal solid waste, waste stabilization

Procedia PDF Downloads 132
636 Object Detection in Digital Images under Non-Standardized Conditions Using Illumination and Shadow Filtering

Authors: Waqqas-ur-Rehman Butt, Martin Servin, Marion Pause

Abstract:

In recent years, object detection has gained much attention and is a very encouraging research area in the field of computer vision. Robust detection of object boundaries in an image is demanded in numerous applications of human-computer interaction and automated surveillance systems. Many methods and approaches have been developed for automatic object detection in various fields, such as automotive, quality control management, and environmental services. Unfortunately, to the best of our knowledge, object detection under varying illumination with shadow consideration has not been well solved yet. Furthermore, this problem is one of the major hurdles keeping object detection methods from practical application. This paper presents an approach to automatic object detection in images under non-standardized environmental conditions. A key challenge is how to detect the object, particularly under uneven illumination. Because image capturing conditions are not standardized, the algorithm needs to consider a variety of possible environmental factors, as colour information, lighting, and shadows vary from image to image. Existing methods mostly fail to produce the appropriate result due to variation in colour information, lighting effects, threshold specifications, histogram dependencies, and colour ranges. To overcome these limitations, we propose an object detection algorithm, with pre-processing methods, that reduces the interference caused by shadow and illumination effects without fixed parameters. We use the YCrCb colour model without any specific colour ranges or predefined threshold values. The segmented object regions are further classified using morphological operations (erosion and dilation) and contours. The proposed approach was applied to a large image data set acquired under various environmental conditions for wood stack detection. Experiments show the promising results of the proposed approach in comparison with existing methods.
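
A minimal sketch of a processing chain in the spirit of the one described above is given below: YCrCb conversion, illumination equalization, a non-fixed (Otsu) threshold, morphological erosion/dilation, and contour extraction. The file name and kernel size are hypothetical, OpenCV 4.x return signatures are assumed, and this is not the authors' implementation.

```python
import cv2
import numpy as np

image = cv2.imread("wood_stack.jpg")                  # hypothetical input image
ycrcb = cv2.cvtColor(image, cv2.COLOR_BGR2YCrCb)      # YCrCb separates luma from chroma
y, cr, cb = cv2.split(ycrcb)

# Equalize the luma channel to reduce uneven illumination, then threshold with
# Otsu's method rather than a fixed, image-specific threshold value.
y_eq = cv2.equalizeHist(y)
_, mask = cv2.threshold(y_eq, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Erosion followed by dilation (an opening) suppresses shadow speckle noise.
kernel = np.ones((5, 5), np.uint8)
mask = cv2.dilate(cv2.erode(mask, kernel, iterations=1), kernel, iterations=1)

# Contours of the cleaned mask give the candidate object regions.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y0, w, h = cv2.boundingRect(c)
    cv2.rectangle(image, (x, y0), (x + w, y0 + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", image)
```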

Keywords: image processing, illumination equalization, shadow filtering, object detection

Procedia PDF Downloads 216
635 The Effect of Different Concentrations of Extracting Solvent on the Polyphenolic Content and Antioxidant Activity of Gynura procumbens Leaves

Authors: Kam Wen Hang, Tan Kee Teng, Huang Poh Ching, Chia Kai Xiang, H. V. Annegowda, H. S. Naveen Kumar

Abstract:

Gynura procumbens (G. procumbens) leaves, commonly known as 'sambung nyawa' in Malaysia, come from a well-known medicinal plant commonly used in folk medicine to control blood glucose and cholesterol levels as well as to treat cancer. These medicinal properties are believed to be related to the polyphenolic content of the G. procumbens extract; therefore, optimization of its extraction process is vital to obtain the highest possible antioxidant activities. The current study was conducted to investigate the effect of different concentrations of the extracting solvent (ethanol) on the amount of polyphenolic content and the antioxidant activities of G. procumbens leaf extract. The concentrations of ethanol used were 30-70%, with the temperature and time kept constant at 50°C and 30 minutes, respectively, using ultrasound-assisted extraction. The polyphenolic content of these extracts was quantified by the Folin-Ciocalteu colorimetric method, and the results were expressed as milligrams of gallic acid equivalent (mg GAE)/g. The phosphomolybdenum method and the 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging assay were used to investigate the antioxidant properties of the extracts, and the results were expressed as milligrams of ascorbic acid equivalent (mg AAE)/g and as the effective concentration (EC50), respectively. Among the three different (30%, 50% and 70%) concentrations of ethanol studied, the 50% ethanolic extract showed a total phenolic content of 31.565 ± 0.344 mg GAE/g and a total antioxidant activity of 78.839 ± 0.199 mg AAE/g, while the 30% ethanolic extract showed 29.214 ± 0.645 mg GAE/g and 70.701 ± 1.394 mg AAE/g, respectively. With respect to the DPPH radical scavenging assay, the 50% ethanolic extract exhibited a slightly lower EC50 (314.3 ± 4.0 μg/ml) than the 30% ethanolic extract (340.4 ± 5.3 μg/ml). Of all the tested extracts, the 70% ethanolic extract exhibited the significantly (p < 0.05) highest total phenolic content (38.000 ± 1.009 mg GAE/g) and total antioxidant capacity (95.874 ± 2.422 mg AAE/g) and demonstrated the lowest EC50 in the DPPH assay (244.2 ± 5.9 μg/ml). Excellent correlations were obtained between total phenolic content and both total antioxidant capacity and DPPH radical scavenging activity (R² = 0.949 and R² = 0.978, respectively). It was concluded from this study that 70% ethanol should be used as the optimal-polarity solvent to obtain G. procumbens leaf extract with maximum polyphenolic content and antioxidant properties.
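
As an illustration of how an EC50 value such as those reported above is typically derived, the sketch below fits a four-parameter logistic dose-response model to DPPH scavenging data; the data points, parameter bounds, and fitting choices are assumptions, not the study's measurements.

```python
# Sketch: estimate EC50 from a DPPH dose-response curve with a 4-parameter logistic fit.
import numpy as np
from scipy.optimize import curve_fit

def logistic4(x, bottom, top, ec50, hill):
    """Four-parameter logistic dose-response model."""
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** hill)

# Hypothetical % DPPH radical scavenging at increasing extract concentrations (ug/ml)
conc = np.array([50, 100, 200, 400, 800], dtype=float)
inhibition = np.array([12.0, 25.0, 44.0, 68.0, 85.0])

params, _ = curve_fit(
    logistic4, conc, inhibition,
    p0=[5.0, 100.0, 250.0, 1.0],
    bounds=([0, 50, 10, 0.1], [30, 120, 2000, 5]),
)
print("estimated EC50: %.1f ug/ml" % params[2])
```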

Keywords: antioxidant activity, DPPH assay, Gynura procumbens, phenolic compounds

Procedia PDF Downloads 412