Search results for: STS benchmark dataset
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1465

565 Imp_HIST-SI: Improved Hybrid Image Segmentation Technique for Satellite Imagery to Decrease the Segmentation Error Rate

Authors: Neetu Manocha

Abstract:

Image segmentation is a technique in which a picture is partitioned into distinct regions whose pixels share similar features and belong to the same objects. Various segmentation strategies have been proposed in recent years by prominent researchers, but after a thorough review, the authors found that the older methods generally do not decrease the segmentation error rate. The authors therefore developed the HIST-SI technique, which merges cluster-based and threshold-based segmentation, to decrease the segmentation error rate. To improve on HIST-SI, they then added filtering and linking steps, yielding a technique named Imp_HIST-SI. The goal of this research is to find a new technique that decreases the segmentation error rate and produces much better results than HIST-SI. For testing the proposed technique, a dataset from Bhuvan, a national geoportal developed and hosted by ISRO (Indian Space Research Organisation), is used. Experiments are conducted using the Scikit-image and OpenCV tools in Python, and performance is evaluated and compared against various existing image segmentation techniques on several metrics, i.e., Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR).
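Since the comparison rests on MSE and PSNR, the two metrics can be sketched in a few lines of Python (a minimal illustration; function names are ours, not the paper's):

```python
import numpy as np

def mse(original, segmented):
    """Mean Square Error between two equally sized 8-bit images."""
    diff = original.astype(np.float64) - segmented.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, segmented, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means less distortion."""
    error = mse(original, segmented)
    if error == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / error)
```

A lower MSE (equivalently, a higher PSNR) against a reference indicates a smaller segmentation error in this evaluation scheme.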

Keywords: satellite image, image segmentation, edge detection, error rate, MSE, PSNR, HIST-SI, linking, filtering, Imp_HIST-SI

Procedia PDF Downloads 120
564 Contextual Paper on Green Finance: Analysis of the Green Bonds Market

Authors: Dina H. Gabr, Mona A. El Bannan

Abstract:

With growing worldwide concern about global warming, green finance has become the fuel that pushes the world to act in combating and mitigating climate change. Coupled with the adoption of the Paris Agreement and the United Nations Sustainable Development Goals, green finance became a vital tool in creating a pathway to sustainable development, as it connects the financial world with environmental and societal benefits. This paper provides a comprehensive review of the concepts and definitions of green finance and the importance of 'green' impact investments today. The core challenge in combating climate change is reducing and controlling greenhouse gas emissions; therefore, this study explores the solutions green finance provides, with emphasis on the use of renewable energy, which is necessary for the transition to a green economy. With increasing attention to the concept of green finance, multiple forms of green investments and financial tools have come to fruition; the most prominent are green bonds. The rise of green bonds, a debt market for financing climate solutions, provides a promising mechanism for sustainable finance. Following the review, this paper compiles a comprehensive green bond dataset and presents a statistical study of the evolution of the green bonds market from its first appearance in 2006 until 2021.

Keywords: climate change, GHG emissions, green bonds, green finance, sustainable finance

Procedia PDF Downloads 100
563 Nowcasting Indonesian Economy

Authors: Ferry Kurniawan

Abstract:

In this paper, we nowcast quarterly output growth in Indonesia by exploiting higher-frequency monthly indicators with a mixed-frequency factor model that uses both quarterly and monthly data. Nowcasting quarterly GDP is particularly relevant for the central bank of Indonesia, which sets the policy rate in its monthly Board of Governors Meeting; one important step in that meeting is the assessment of the current state of the economy. An accurate, up-to-date quarterly GDP nowcast every time new monthly information becomes available is therefore clearly of interest, for example because the initial assessment of the current state of the economy, including the nowcast, is used as input for longer-term forecasts. We consider a small-scale mixed-frequency factor model to produce nowcasts. In particular, we specify variables as year-on-year growth rates, so the relation between quarterly and monthly data is expressed in year-on-year growth rates. To assess the performance of the model, we compare the nowcasts with two other approaches: an autoregressive model, which is often difficult to outperform when forecasting output growth, and Mixed Data Sampling (MIDAS) regression. Both the mixed-frequency factor model and the MIDAS nowcasts are produced from the same set of monthly indicators, so we compare the nowcast performance of the two approaches directly. To preview the results, we find that exploiting monthly indicators with either the mixed-frequency factor model or MIDAS regression improves nowcast accuracy over a benchmark simple autoregressive model that uses only quarterly data. However, it is not clear whether MIDAS or the mixed-frequency factor model is better: neither set of nowcasts encompasses the other, suggesting that both are valuable in nowcasting GDP but neither is sufficient. By combining the two individual nowcasts, we find that the combination not only increases accuracy relative to the individual nowcasts but also lowers the risk of the worst individual performance.
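The combination step can be illustrated with inverse-MSE weighting, one common forecast-combination scheme (the abstract does not specify which weights the authors use, so this is an assumption):

```python
import numpy as np

def combine_nowcasts(nowcast_a, nowcast_b, past_errors_a, past_errors_b):
    """Weight two nowcasts by the inverse of their past mean squared errors,
    so the historically more accurate model gets the larger weight."""
    mse_a = np.mean(np.square(past_errors_a))
    mse_b = np.mean(np.square(past_errors_b))
    w_a = (1.0 / mse_a) / (1.0 / mse_a + 1.0 / mse_b)
    return w_a * nowcast_a + (1.0 - w_a) * nowcast_b
```

With past errors of equal sign-free size 1 for model A and 2 for model B, A receives weight 0.8 and the combination leans toward the more accurate nowcast.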

Keywords: nowcasting, mixed-frequency data, factor model, nowcasts combination

Procedia PDF Downloads 314
562 Analysing Techniques for Fusing Multimodal Data in Predictive Scenarios Using Convolutional Neural Networks

Authors: Philipp Ruf, Massiwa Chabbi, Christoph Reich, Djaffar Ould-Abdeslam

Abstract:

In recent years, convolutional neural networks (CNNs) have demonstrated high performance in image analysis, but oftentimes only structured data are available for a specific problem. By interpreting structured data as images, CNNs can effectively learn and extract valuable insights from tabular data, leading to improved predictive accuracy and uncovering hidden patterns that may not be apparent in traditional structured data analysis. Applying a single neural network to multimodal data, i.e., both structured and unstructured information, can yield significant advantages in time complexity and energy efficiency. Converting structured data into images and merging them with existing visual material is thus a promising approach for applying CNNs to multimodal datasets, as often occur in a medical context. Using suitable preprocessing techniques, structured data are transformed into image representations in which the respective features are expressed as different formations of colors and shapes. In an additional step, these representations are fused with existing images to incorporate both types of information, and the resulting image is analyzed with a CNN.
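One very simple way to render a feature row as an image tile is min-max normalization onto a padded grayscale grid (a sketch of the general idea only; the paper encodes features as colors and shapes, and the function name is ours):

```python
import numpy as np

def tabular_to_image(row, side=4):
    """Render one feature row as a side x side grayscale tile.

    Features are min-max normalized to [0, 255] and zero-padded to fill
    the grid, so a CNN can consume the row like a small image.
    """
    x = np.asarray(row, dtype=np.float64)
    span = x.max() - x.min()
    x = (x - x.min()) / span * 255.0 if span > 0 else np.zeros_like(x)
    tile = np.zeros(side * side)
    tile[:x.size] = x
    return tile.reshape(side, side).astype(np.uint8)
```

The resulting tile can then be concatenated or channel-stacked with the existing image before being fed to the CNN.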

Keywords: CNN, image processing, tabular data, mixed dataset, data transformation, multimodal fusion

Procedia PDF Downloads 96
561 Variational Explanation Generator: Generating Explanation for Natural Language Inference Using Variational Auto-Encoder

Authors: Zhen Cheng, Xinyu Dai, Shujian Huang, Jiajun Chen

Abstract:

Recently, explanatory natural language inference has attracted much attention for the interpretability of logic relationship prediction; this task is also known as explanation generation for Natural Language Inference (NLI). Existing explanation generators based on a discriminative encoder-decoder architecture have achieved noticeable results. However, we find that these discriminative generators usually produce explanations with correct evidence but incorrect logic semantics. This is because logic information is implicitly encoded in the premise-hypothesis pairs and is difficult to model. The same logic information, however, exists in both the premise-hypothesis pair and the explanation, and it is easy to extract the logic information that is explicitly contained in the target explanation. Hence we assume that there exists a latent space of logic information while generating explanations. Specifically, we propose a generative model called Variational Explanation Generator (VariationalEG) with a latent variable to model this space. Trained under the guidance of the explicit logic information in target explanations, the latent variable in VariationalEG can effectively capture the implicit logic information in premise-hypothesis pairs. Additionally, to tackle the problem of posterior collapse while training VariationalEG, we propose a simple yet effective approach called Logic Supervision on the latent variable to force it to encode logic information. Experiments on the explanation generation benchmark, explanation-Stanford Natural Language Inference (e-SNLI), demonstrate that the proposed VariationalEG achieves significant improvement compared to previous studies and yields a state-of-the-art result. Furthermore, we analyze the generated explanations to demonstrate the effect of the latent variable.
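A latent variable in a variational model like this is typically sampled with the standard reparameterization trick; a NumPy sketch of that generic step (not the authors' code, which would live inside a deep-learning framework):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I).

    Writing the sample this way keeps it differentiable with respect to
    mu and log_var when used inside an autodiff framework.
    """
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * np.asarray(log_var)) * eps
```

As the predicted variance shrinks, the sampled z collapses onto the mean, which is also the failure mode (posterior collapse) that the paper's Logic Supervision is designed to counteract.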

Keywords: natural language inference, explanation generation, variational auto-encoder, generative model

Procedia PDF Downloads 128
560 Identity Management in Virtual Worlds Based on Biometrics Watermarking

Authors: S. Bader, N. Essoukri Ben Amara

Abstract:

With technological development and the rise of virtual worlds, these spaces are becoming more and more attractive to cybercriminals hidden behind avatars and fictitious identities. Since access to these spaces is not restricted or controlled, some impostors gain unauthorized access and engage in cybercrime. This paper proposes an identity management approach for securing access to virtual worlds. The major purpose of the suggested solution is to establish a strong security mechanism that protects the virtual identities represented by avatars, so that only legitimate users, through their corresponding avatars, are allowed to access the platform resources. Access is controlled by an authentication process based on biometrics. During registration, a user fingerprint is enrolled and then encrypted into a watermark using a cancelable and non-invertible algorithm for its protection. After a user personalizes their representative character, the biometric mark is embedded into the avatar through a watermarking procedure. The authenticity of the avatar identity is verified when it requests authorization for access. We have evaluated the proposed approach on a dataset of avatars from various virtual worlds and registered promising performance in terms of authentication accuracy and acceptance and rejection rates.
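The embedding step can be illustrated with the simplest watermarking scheme, least-significant-bit (LSB) substitution. Note this is only a didactic stand-in: the paper's actual algorithm is cancelable and non-invertible, which plain LSB is not.

```python
import numpy as np

def embed_lsb(img, bits):
    """Hide watermark bits in the least-significant bit of the first pixels."""
    flat = img.flatten()  # flatten() copies, so the input is untouched
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.asarray(bits, dtype=img.dtype)
    return flat.reshape(img.shape)

def extract_lsb(img, n_bits):
    """Recover the first n_bits embedded watermark bits."""
    return img.flatten()[:n_bits] & 1
```

Because only the lowest bit changes, every pixel moves by at most one intensity level, which is why watermarks of this family are visually imperceptible.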

Keywords: identity management, security, biometrics authentication and authorization, avatar, virtual world

Procedia PDF Downloads 248
559 Medical Image Augmentation Using Spatial Transformations for Convolutional Neural Network

Authors: Trupti Chavan, Ramachandra Guda, Kameshwar Rao

Abstract:

The lack of data is a major problem in medical image analysis using a convolutional neural network (CNN). This work uses various spatial transformation techniques to address the medical image augmentation issue for knee detection and localization using an enhanced single shot detector (SSD) network. Spatial transforms such as negative, histogram equalization, power law, sharpening, averaging, and Gaussian blurring help to generate more samples, serve as pre-processing methods, and highlight the features of interest. The experimentation is done on the OpenKnee dataset, a collection of knee images from openly available online sources. The enhanced SSD is utilized for the detection and localization of the knee joint from a given X-ray image. It is an enhanced version of the well-known SSD network, modified so as to reduce the number of prediction boxes at the output side; it consists of a classification network (VGGNet) and an auxiliary detection network. The performance is measured in mean average precision (mAP), and 99.96% mAP is achieved using the proposed enhanced SSD with spatial transformations. It is also seen that with spatial augmentation the localization boundary is comparatively more refined and closer to the ground truth, giving better detection and localization of knee joints.
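Two of the listed transforms, negative and power law (gamma), reduce to one-liners on an 8-bit image; a NumPy sketch of how one X-ray yields several training variants (not the authors' pipeline):

```python
import numpy as np

def negative(img):
    """Invert an 8-bit image: s = 255 - r."""
    return 255 - img

def power_law(img, gamma):
    """Gamma correction: s = 255 * (r / 255) ** gamma."""
    return (255.0 * (img / 255.0) ** gamma).astype(np.uint8)

def augment(img):
    """Generate extra training samples from one X-ray image."""
    return [negative(img), power_law(img, 0.5), power_law(img, 2.0)]
```

Gamma below 1 brightens dark regions and gamma above 1 darkens bright ones, so the three variants expose the detector to different contrast regimes of the same anatomy.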

Keywords: data augmentation, enhanced SSD, knee detection and localization, medical image analysis, OpenKnee, spatial transformations

Procedia PDF Downloads 137
558 A Robust Visual Simultaneous Localization and Mapping for Indoor Dynamic Environment

Authors: Xiang Zhang, Daohong Yang, Ziyuan Wu, Lei Li, Wanting Zhou

Abstract:

Visual Simultaneous Localization and Mapping (VSLAM) uses cameras to collect information in unknown environments in order to perform localization and environment map construction simultaneously; it has a wide range of applications in autonomous driving, virtual reality, and related fields. Existing VSLAM methods can maintain high accuracy in static environments. In dynamic environments, however, the movement of objects in the scene reduces the stability of the VSLAM system, resulting in inaccurate localization and mapping, or even failure. In this paper, a robust VSLAM method is proposed to deal effectively with dynamic environments. We propose a dynamic region removal scheme based on semantic segmentation neural networks and geometric constraints. First, a semantic segmentation neural network extracts the prior active motion region, prior static region, and prior passive motion region in the environment. Then, a lightweight frame tracking module initializes the transform pose between the previous frame and the current frame on the prior static region. A motion consistency detection module based on multi-view geometry and scene flow divides the environment into a static region and a dynamic region, so that the dynamic object region is eliminated. Finally, only the static region is used by the tracking thread. Our work is based on the ORBSLAM3 system, one of the most effective VSLAM systems available. We evaluated our method on the TUM RGB-D benchmark, and the results demonstrate that the proposed VSLAM method improves the accuracy of the original ORBSLAM3 by 70% to 98.5% in highly dynamic environments.
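The multi-view-geometry part of a motion consistency check typically measures how far a matched point strays from its epipolar line; a minimal sketch (the fundamental matrix and threshold logic here are illustrative, not the paper's exact formulation):

```python
import numpy as np

def epipolar_distance(F, p1, p2):
    """Distance of homogeneous pixel p2 from the epipolar line F @ p1.

    A static point should lie (near) the line; a large distance flags
    the match as belonging to a dynamic object.
    """
    line = F @ p1
    return abs(line @ p2) / np.hypot(line[0], line[1])
```

For a camera translating horizontally, epipolar lines are horizontal, so a match that keeps its row is consistent while one that jumps rows is flagged as dynamic.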

Keywords: dynamic scene, dynamic visual SLAM, semantic segmentation, scene flow, VSLAM

Procedia PDF Downloads 90
557 RGB Color Based Real Time Traffic Sign Detection and Feature Extraction System

Authors: Kay Thinzar Phu, Lwin Lwin Oo

Abstract:

In intelligent transport systems and advanced driver assistance systems, the development of a real-time traffic sign detection and recognition (TSDR) system plays an important part in current research. Developing a real-time TSDR system faces many challenges due to motion artifacts, variable lighting and weather conditions, and the varying conditions of the traffic signs themselves. Researchers have already proposed various methods to address these challenges. The aim of the proposed research is to develop an efficient and effective TSDR system in real time. This system proposes an adaptive thresholding method based on RGB color for traffic sign detection and new features for traffic sign recognition. RGB color thresholding is used to detect blue and yellow traffic sign regions, and shape identification then decides whether an output candidate region is a traffic sign or not. Lastly, new features such as termination points, bifurcation points, and 90° angles are extracted from the validated image. This system uses the Myanmar Traffic Sign dataset.
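A fixed-ratio variant of the RGB test for blue sign regions looks as follows (the paper's thresholds are adaptive; the constant ratio here is an illustrative simplification):

```python
import numpy as np

def blue_mask(img, ratio=1.5):
    """Mark pixels whose blue channel dominates both red and green.

    img is an H x W x 3 RGB array; the result is a boolean H x W mask
    of candidate blue-sign pixels.
    """
    r = img[..., 0].astype(np.int32)
    g = img[..., 1].astype(np.int32)
    b = img[..., 2].astype(np.int32)
    return (b > ratio * r) & (b > ratio * g)
```

An analogous yellow mask would require both red and green to dominate blue; connected regions of the mask then go to the shape-identification stage.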

Keywords: adaptive thresholding based on RGB color, blue color detection, feature extraction, yellow color detection

Procedia PDF Downloads 284
556 Sea-Land Segmentation Method Based on the Transformer with Enhanced Edge Supervision

Authors: Lianzhong Zhang, Chao Huang

Abstract:

Sea-land segmentation is a basic step in many tasks, such as sea surface monitoring and ship detection. Existing sea-land segmentation algorithms have poor segmentation accuracy, and their parameter adjustments are cumbersome and difficult to fit to actual needs. Moreover, current sea-land segmentation adopts traditional deep learning models based on Convolutional Neural Networks (CNNs). The transformer architecture has achieved great success in the field of natural images, but its application to radar images is less studied. Therefore, this paper proposes a sea-land segmentation method based on the transformer architecture with strengthened edge supervision. It uses a self-attention mechanism with a gating strategy to better learn relative position bias, and it introduces an additional edge supervision branch. In the decoder stage, the feature information of the two branches interacts, thereby improving the edge precision of the sea-land segmentation. Experimental results on a Gaofen-3 satellite image dataset show that the proposed method effectively improves the accuracy of sea-land segmentation, especially at sea-land edges. The mean IoU (Intersection over Union), edge precision, overall precision, and F1 score reach 96.36%, 84.54%, 99.74%, and 98.05%, respectively, which are superior to those of mainstream segmentation models and of high practical value.
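The headline mIoU figure is the per-class Intersection over Union averaged over classes; for one binary mask it is a few lines of NumPy:

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union for two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return np.logical_and(pred, target).sum() / union
```

Computing this separately for the sea and land masks and averaging gives the mean IoU reported above.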

Keywords: SAR, sea-land segmentation, deep learning, transformer

Procedia PDF Downloads 146
555 Human Action Recognition Using Variational Bayesian HMM with Dirichlet Process Mixture of Gaussian Wishart Emission Model

Authors: Wanhyun Cho, Soonja Kang, Sangkyoon Kim, Soonyoung Park

Abstract:

In this paper, we present a human action recognition method using a variational Bayesian HMM with a Dirichlet process mixture (DPM) of Gaussian-Wishart emission models (GWEM). First, we define the Bayesian HMM based on the Dirichlet process, which allows an infinite number of Gaussian-Wishart components to support continuous emission observations. Second, we consider an efficient variational Bayesian inference method that can be applied to derive the posterior distribution of the hidden variables and model parameters for the proposed model from training data, and we derive the predictive distribution that may be used to classify new actions. Third, the paper proposes a process for extracting appropriate spatio-temporal feature vectors that can be used to recognize a wide range of human behaviors from input video. Finally, we conduct experiments to evaluate the performance of the proposed method. The experimental results show that the presented method is more effective for human action recognition than existing methods.
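The "infinite number of components" comes from the Dirichlet process, whose mixture weights are commonly drawn by (truncated) stick-breaking; a generic sketch of that construction (not the authors' inference code):

```python
import numpy as np

def stick_breaking(alpha, k, rng):
    """Truncated stick-breaking weights of a Dirichlet process DP(alpha).

    Each beta_i breaks off a fraction of the stick left over from the
    first i-1 breaks; truncating at k components approximates the
    infinite sequence.
    """
    betas = rng.beta(1.0, alpha, size=k)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    return betas * remaining
```

Smaller concentration alpha puts most mass on the first few sticks, so the model effectively uses only as many Gaussian-Wishart components as the data support.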

Keywords: human action recognition, Bayesian HMM, Dirichlet process mixture model, Gaussian-Wishart emission model, variational Bayesian inference, prior and approximate posterior distributions, KTH dataset

Procedia PDF Downloads 332
554 Assisted Prediction of Hypertension Based on Heart Rate Variability and Improved Residual Networks

Authors: Yong Zhao, Jian He, Cheng Zhang

Abstract:

Cardiovascular diseases caused by hypertension are extremely threatening to human health, and early diagnosis of hypertension can save a large number of lives. Traditional hypertension detection methods require special equipment and have difficulty detecting continuous blood pressure changes. This paper first analyzes the principle of heart rate variability (HRV) and introduces a sliding window and power spectral density (PSD) estimation to analyze the time-domain and frequency-domain features of HRV. It then designs an HRV-based hypertension prediction network combining ResNet, an attention mechanism, and a multilayer perceptron: frequency-domain features are extracted through a modified ResNet18, fused with time-domain features through the attention mechanism, and used for the auxiliary prediction of hypertension through the multilayer perceptron. Finally, the network was trained and tested using the publicly available SHAREE dataset on PhysioNet. The test results show that this network achieves 92.06% prediction accuracy for hypertension and outperforms K-Nearest Neighbor (KNN), Bayes, logistic regression, and traditional Convolutional Neural Network (CNN) models in prediction performance.
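The basic frequency-domain HRV feature is the power of the (resampled) RR-interval series in a band such as LF (0.04-0.15 Hz); a plain periodogram sketch (the paper uses sliding-window PSD estimates, so this is a simplification):

```python
import numpy as np

def band_power(signal, fs, band):
    """Integrated periodogram power of a uniformly resampled RR series
    in one frequency band [band[0], band[1]) in Hz."""
    x = np.asarray(signal, dtype=np.float64)
    x = x - x.mean()  # remove DC before estimating the spectrum
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * x.size)
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].sum() * (freqs[1] - freqs[0])
```

Ratios of such band powers (e.g. LF/HF) are the standard frequency-domain HRV features that a network like the one above consumes.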

Keywords: feature extraction, heart rate variability, hypertension, residual networks

Procedia PDF Downloads 75
553 Clique and Clan Analysis of Patient-Sharing Physician Collaborations

Authors: Shahadat Uddin, Md Ekramul Hossain, Arif Khan

Abstract:

The collaboration among physicians during episodes of care for a hospitalised patient contributes significantly to effective health outcomes. This research aims at improving such outcomes by analysing the attributes of patient-sharing physician collaboration networks (PCNs) built from hospital data. To accomplish this goal, we present a research framework that explores the impact of several types of PCN attributes (such as cliques and clans) on hospitalisation cost and hospital length of stay. We use an electronic health insurance claim dataset to construct and explore PCNs, each of which is categorised as 'low' or 'high' in terms of hospitalisation cost and length of stay. The results from the proposed model show that the cliques and clans of PCNs affect hospitalisation cost and length of stay and distinguish 'low' from 'high' PCNs on both measures. The findings and insights from this research can potentially help healthcare stakeholders to better formulate policy in order to improve quality of care while reducing cost.
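A clique in a PCN is a set of physicians every pair of whom shares patients; the membership test is a one-liner over the edge set (names here are illustrative, not from the paper):

```python
from itertools import combinations

def is_clique(physicians, shared_patient_edges):
    """True if every pair of physicians in the set shares patients,
    i.e. every pair appears as an undirected edge in the PCN."""
    return all(frozenset(pair) in shared_patient_edges
               for pair in combinations(physicians, 2))
```

Clans relax this condition by allowing pairs to be connected through short paths inside the subgraph rather than requiring a direct edge.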

Keywords: clique, clan, electronic health records, physician collaboration

Procedia PDF Downloads 127
552 Developing a Secure Iris Recognition System by Using Advance Convolutional Neural Network

Authors: Kamyar Fakhr, Roozbeh Salmani

Abstract:

Alphonse Bertillon developed the first biometric security system in the 1800s. Today, many governments and giant companies are considering or have procured biometrically enabled security schemes. The iris is a kaleidoscope of patterns and colors, and each individual's irises are more distinctive than their thumbprint. Every single day, giant companies like Google and Apple are experimenting with reliable biometric systems. Yet, after almost 200 years of improvements, face ID does not work with masks, grants access to fake 3D images, and there is still no global use of biometric recognition systems as a national identity (ID) card. The goal of this paper is to demonstrate the advantages of iris recognition over other biometric recognition systems. It makes two contributions: first, we illustrate, on a very large dataset of 3.4M people, how much internet fraud and cyber abuse happens due to bugs in face recognition systems; second, we discuss how establishing a secure global network of iris recognition devices connected to authoritative convolutional neural networks could be the safest solution to this dilemma. Another aim of this study is to provide a system that prevents infiltration by cyber-attacks and blocks all wireframes to the data until the main user ceases the procedure.

Keywords: biometric system, convolutional neural network, cyber-attack, secure

Procedia PDF Downloads 193
551 Maturity Transformation Risk Factors in Islamic Banking: An Implication of Basel III Liquidity Regulations

Authors: Haroon Mahmood, Christopher Gan, Cuong Nguyen

Abstract:

Maturity transformation risk is highlighted as one of the major causes of the recent global financial crisis. Basel III has proposed new liquidity regulations for the transformation function of banks in order to monitor this risk; specifically, the net stable funding ratio (NSFR) is introduced to enhance medium- and long-term resilience against liquidity shocks. Islamic banking is widely accepted in many parts of the world and contributes a significant portion of the financial sector in many countries. Using a dataset of 68 fully fledged Islamic banks from 11 different countries over the period 2005-2014, this study analyzes various factors that may significantly affect maturity transformation risk in these banks. We apply a two-step system GMM estimation technique to the unbalanced panel and find that bank capital, credit risk, financing, size, and market power are the most significant bank-specific factors, while gross domestic product and inflation are the significant macroeconomic factors influencing this risk. However, bank profitability, asset efficiency, and income diversity are found insignificant in determining maturity transformation risk in the Islamic banking model.

Keywords: Basel III, Islamic banking, maturity transformation risk, net stable funding ratio

Procedia PDF Downloads 396
550 Using the Ecological Analysis Method to Justify the Environmental Feasibility of Biohydrogen Production from Cassava Wastewater Biogas

Authors: Jonni Guiller Madeira, Angel Sanchez Delgado, Ronney Mancebo Boloy

Abstract:

The use of bioenergy has, in recent years, become a good alternative for reducing the emission of polluting gases. Several Brazilian and foreign companies are studying waste management as an essential tool in the search for energy efficiency, taking the ecological aspect into consideration as well. Brazil is one of the largest cassava producers in the world, and cassava sub-products are the food base of millions of Brazilians. The body of results on the ecological impact of producing biohydrogen, by steam reforming, from cassava wastewater biogas is very limited because, in general, this commodity is more common in underdeveloped countries. Hydrogen produced from cassava wastewater appears as an alternative to fossil fuels since it is a low-cost carbon source. This paper evaluates the environmental impact of biohydrogen production, by steam reforming, from cassava wastewater biogas. The ecological efficiency methodology developed by Cardu and Baica is used as a benchmark; it mainly assesses emissions of equivalent carbon dioxide (CO₂, SOₓ, CH₄, and particulate matter). Environmental parameters such as equivalent carbon dioxide emissions, the pollution indicator, and ecological efficiency are evaluated because they are important to energy production. Averaging these parameters over different biogas compositions (different concentrations of methane) gives an average pollution indicator of 10.11 kgCO₂e/kgH₂ and an average ecological efficiency of 93.37%. In conclusion, bioenergy production using biohydrogen from a cassava wastewater treatment plant is a good option from the environmental feasibility point of view, as shown by the determination of these environmental parameters and their comparison with those of hydrogen production via steam reforming from other fuels.

Keywords: biohydrogen, ecological efficiency, cassava, pollution indicator

Procedia PDF Downloads 180
549 A Monolithic Arbitrary Lagrangian-Eulerian Finite Element Strategy for Partly Submerged Solid in Incompressible Fluid with Mortar Method for Modeling the Contact Surface

Authors: Suman Dutta, Manish Agrawal, C. S. Jog

Abstract:

Accurate computation of hydrodynamic forces on floating structures and their deformation finds application in ocean and naval engineering and wave energy harvesting. This manuscript presents a monolithic finite element strategy for fluid-structure interaction involving hyper-elastic solids partly submerged in an incompressible fluid. A velocity-based Arbitrary Lagrangian-Eulerian (ALE) formulation is used for the fluid and a displacement-based Lagrangian approach for the solid. The flexibility of the ALE technique permits us to treat the free surface of the fluid as a Lagrangian entity. At the interface, the continuity of displacement, velocity, and traction is enforced using the mortar method, in which the constraints are imposed in a weak sense via Lagrange multipliers; in the literature, the mortar method has been shown to be robust for various contact mechanics problems. The time-stepping strategy used in this work reduces to the generalized trapezoidal rule in the Eulerian setting. In the Lagrangian limit, in the absence of external load, the algorithm conserves the linear and angular momentum and the total energy of the system. The use of monolithic coupling with an energy-conserving time-stepping strategy gives an unconditionally stable algorithm and allows the user to take large time steps. All the governing equations and boundary conditions are mapped to the reference configuration, and the use of the exact tangent stiffness matrix ensures that the algorithm converges quadratically within each time step. The robustness and good performance of the proposed method are demonstrated by solving benchmark problems from the literature.

Keywords: ALE, floating body, fluid-structure interaction, monolithic, mortar method

Procedia PDF Downloads 263
548 Size Effects on Structural Performance of Concrete Gravity Dams

Authors: Mehmet Akköse

Abstract:

Concern about the seismic safety of concrete dams has been growing around the world, partly because the population at risk downstream of major dams continues to expand and partly because it is increasingly evident that the seismic design concepts in use when most existing dams were built were inadequate. Most past investigations have been conducted on large dams, typically above 100 m high, yet a large number of concrete dams in our country and elsewhere are less than 50 m high. Most of these dams were designed using pseudo-static methods, ignoring the dynamic characteristics of the structure as well as the characteristics of the ground motion. It is therefore important to investigate the seismic behavior of this category of dam in order to assess the safety of existing dams and to improve the knowledge base for high dams to be constructed in the future. In this study, size effects on the structural performance of concrete gravity dams subjected to near- and far-fault ground motions are investigated, including dam-water-foundation interaction. A benchmark problem proposed by ICOLD (International Commission on Large Dams) is chosen as a numerical application. The structural performance of the dam at five different heights is evaluated according to the damage criteria of USACE (U.S. Army Corps of Engineers), and these criteria determine whether non-linear analysis of the dams is required. The linear elastic dynamic analyses of the dams under near- and far-fault ground motions are performed using step-by-step integration with a time step of 0.0025 s. The Rayleigh damping constants are calculated assuming a 5% damping ratio. The program NONSAP, modified for fluid-structure systems with the Lagrangian fluid finite element, is employed in the response calculations.
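The Rayleigh constants mentioned follow from two control frequencies and the damping ratio via alpha = 2*zeta*w1*w2/(w1+w2) and beta = 2*zeta/(w1+w2); a small sketch (the control frequencies are illustrative inputs, not values from the paper):

```python
import math

def rayleigh_constants(f1_hz, f2_hz, zeta=0.05):
    """Mass-proportional (alpha) and stiffness-proportional (beta)
    Rayleigh damping coefficients giving damping ratio zeta exactly
    at the two control frequencies."""
    w1 = 2.0 * math.pi * f1_hz
    w2 = 2.0 * math.pi * f2_hz
    alpha = 2.0 * zeta * w1 * w2 / (w1 + w2)
    beta = 2.0 * zeta / (w1 + w2)
    return alpha, beta
```

Between the two control frequencies the effective damping dips slightly below zeta, and outside them it grows, which is why the frequencies bracketing the dam's dominant modes are chosen as controls.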

Keywords: concrete gravity dams, Lagrangian approach, near and far-fault ground motion, USACE damage criterions

Procedia PDF Downloads 255
547 BTEX Removal from Water: A Comparative Analysis of Efficiency of Low Cost Adsorbents and Granular Activated Carbon

Authors: Juliet Okoli

Abstract:

The removal of BTEX (benzene, toluene, ethylbenzene and p-xylene) from water by orange peel and eggshell, compared to granular activated carbon (GAC), was investigated. The influence of factors such as contact time, dosage and pH on BTEX removal by virgin orange peel and eggshell was assessed using a batch adsorption set-up and compared against GAC, which serves as the benchmark for this study. Further modification of these virgin low-cost adsorbents (LCAs) into activated carbon was also carried out. The batch adsorption results showed that the optimum contact time, dosage and pH for BTEX removal by the virgin LCAs were 180 minutes, 0.5 g and 7, while those for GAC were 30 minutes, 0.2 g and 7. The maximum adsorption capacities for total BTEX shown by orange peel and eggshell were 42 mg/g and 59 mg/g respectively, while that of GAC was 864 mg/g. The adsorbents' affinity for the adsorbates followed the order X > E > T > B. A comparison of batch and column set-ups showed the batch set-up to be more efficient. The isotherm data for the virgin LCAs and GAC fitted the Freundlich isotherm better than the Langmuir model, with n values > 1 for GAC and n < 1 for the virgin LCAs, indicating more favorable adsorption of BTEX onto GAC. The adsorption kinetics of all three adsorbents were well described by the pseudo-second-order model, suggesting chemisorption as the rate-limiting step. This was further confirmed by desorption studies, as low levels of BTEX (< 10%) were recovered from the spent adsorbents, especially GAC (< 3%). Further activation of the LCAs revealed that the virgin LCAs had a slightly higher adsorption capacity than the activated LCAs. Economic analysis revealed that the total cost required to clean up 9,600 m³ of BTEX-contaminated water using LCAs was only 2.8% less than with GAC, a difference that could be considered negligible. However, this area still requires a more detailed cost-benefit analysis; if similar conclusions are reached, easily obtained low-cost adsorbents remain promising for BTEX removal from aqueous solution, although GAC is still superior to these materials.
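The Freundlich fit reported above is conventionally obtained from the linearized form log(qe) = log(Kf) + (1/n) log(Ce). A minimal sketch with hypothetical equilibrium data (the abstract does not publish its raw isotherm points); the recovered n > 1 would correspond to the favorable-adsorption case the study reports for GAC:

```python
import math

def fit_freundlich(ce, qe):
    """Least-squares fit of the linearized Freundlich isotherm
    log10(qe) = log10(Kf) + (1/n) * log10(Ce); returns (Kf, n)."""
    xs = [math.log10(c) for c in ce]
    ys = [math.log10(q) for q in qe]
    m = len(xs)
    xbar, ybar = sum(xs) / m, sum(ys) / m
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    intercept = ybar - slope * xbar
    return 10 ** intercept, 1.0 / slope  # Kf, n

# Hypothetical equilibrium concentrations (mg/L) and uptakes (mg/g),
# generated from Kf = 5, n = 2, i.e. qe = 5 * Ce**0.5
ce = [1.0, 4.0, 9.0, 16.0]
qe = [5.0 * c ** 0.5 for c in ce]
kf, n = fit_freundlich(ce, qe)
```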

Keywords: activated carbon, BTEX removal, low cost adsorbents, water treatment

Procedia PDF Downloads 241
546 Understanding Walkability in the Libyan Urban Space: Policies, Perceptions and Smart Design for Sustainable Tripoli

Authors: A. Abdulla Khairi Mohamed, Mohamed Gamal Abdelmonem, Gehan Selim

Abstract:

Walkability in civic and public spaces in Libyan cities is challenging due to the lack of accessibility design, informal merging into car traffic, and the general absence of adequate urban and space planning. The lack of accessible and pedestrian-friendly public spaces in Libyan cities has emerged as a major concern for the government if it is to develop smart and sustainable spaces for the 21st century. A walkable urban space has become a driver for urban development and for the redistribution of land use to ensure pedestrian and walkable routes between sites of living and workplaces. The characteristics of urban open space in the city centre play a major role in attracting people to walk while attending to their daily needs, recreation and daily sports. There is a significant gap in the understanding of the perceptions, feasibility and capability of Libyan urban space to accommodate, enhance or support the smart design of a walkable, pedestrian-friendly environment that is safe and accessible to everyone. The paper aims to undertake observations of walkability and walkable space in the city of Tripoli as a benchmark for Libyan cities; to assess the validity and consistency of the seven principal aspects of smart design, safety and accessibility and the 51 factors that affect walkability in open urban space in Tripoli, through the analysis of 10 local urban space experts (town planners, architects, transport engineers and urban designers); and to explore user groups' perceptions of accessibility in walkable spaces in Libyan cities through questionnaires. The study sampled 200 respondents in 2015-16. The results of this study are useful for urban planning, to classify the elements of walkable urban space that improve the level of walkability in Libyan cities and create sustainable and liveable urban spaces.

Keywords: walkability, sustainability, liveability, accessibility

Procedia PDF Downloads 412
545 A Method for Rapid Evaluation of Ore Breakage Parameters from Core Images

Authors: A. Nguyen, K. Nguyen, J. Jackson, E. Manlapig

Abstract:

With recent advancements in core imaging systems, a large volume of high-resolution drill core images can now be collected rapidly. This paper presents a method for rapid prediction of ore-specific breakage parameters from high-resolution, mineral-classified core images. The aim is to allow a rapid assessment of the variability in ore hardness within a mineral deposit with a reduced amount of physical breakage testing. The method finds its application primarily in the project evaluation phase, where proper evaluation of the variability in ore hardness of the orebody normally requires a prolonged and costly metallurgical test work program. Applying this image-based texture analysis method to mineral-classified core images, the ores are classified according to their textural characteristics. A small number of physical tests are performed to produce a dataset used for developing the relationship between texture classes and measured ore hardness. The paper also presents a case study in which this method has been applied to core samples from a copper porphyry deposit to predict the ore-specific breakage A*b parameter obtained from JKRBT tests.
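The calibration step described above — relating texture classes to a small number of measured hardness values — can be sketched in its simplest form as a per-class lookup. This is not the authors' model, just the core idea, with hypothetical texture labels and A*b values:

```python
def calibrate(class_labels, measured_ab):
    """Mean measured A*b per texture class, from a small calibration set
    of physically tested samples."""
    sums, counts = {}, {}
    for c, ab in zip(class_labels, measured_ab):
        sums[c] = sums.get(c, 0.0) + ab
        counts[c] = counts.get(c, 0) + 1
    return {c: sums[c] / counts[c] for c in sums}

def predict(model, classes):
    """Predict A*b for new core intervals from their texture class."""
    return [model[c] for c in classes]

# Hypothetical calibration: texture class of each tested sample and its
# JKRBT-measured A*b value
model = calibrate(["veined", "massive", "veined", "massive"],
                  [45.0, 30.0, 47.0, 28.0])
pred = predict(model, ["massive", "veined"])
```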

Keywords: geometallurgy, hyperspectral drill core imaging, process simulation, texture analysis

Procedia PDF Downloads 337
544 Prediction Study of a Corroded Pressure Vessel Using Evaluation Measurements and Finite Element Analysis

Authors: Ganbat Danaa, Chuluundorj Puntsag

Abstract:

The steel structures of the Oyu-Tolgoi mining concentrator plant corrode during operation, raising doubts about the continued use of some important structures and posing a problem for the plant's regular operation. The bottom part of a pressure vessel that plays an important role in the reliable operation of the concentrate filter-drying unit was heavily corroded, so it was necessary to study it through engineering calculation, modeling and simulation using modern engineering software and methods. The purpose of this research is to investigate, using advanced engineering software, whether the corroded part of the pressure vessel can continue in normal service, and to predict the remaining service life of the vessel based on engineering calculations. With the bottom of the pressure vessel thinned by 0.5 mm due to corrosion, as detected by non-destructive testing, finite element analysis in ANSYS Workbench was used to determine the mechanical stress, strain and safety factor in the wall and bottom of the vessel operating under a 2.2 MPa working pressure, and conclusions were drawn on its continued use. Following the resulting recommendations, sand-blast cleaning and anti-corrosion paint can ensure the normal, continuous and reliable operation of the concentrator plant while avoiding the ordering of new pressure vessels and the associated installation period. This research work can serve as a benchmark for assessing the corrosion condition of steel parts of pressure vessels and other metallic and non-metallic structures operating under severe conditions of corrosion and static and dynamic loads, making it possible to analyze such structures and to evaluate and control their integrity and reliable operation.
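The finite element results themselves are not reproduced here, but the effect of wall thinning on the safety factor can be illustrated with a thin-wall hand check. A minimal sketch using the 2.2 MPa working pressure from the abstract; the diameter, as-built wall thickness and yield strength are hypothetical placeholders, not the vessel's actual data:

```python
def hoop_stress(p_mpa, d_mm, t_mm):
    """Thin-wall hoop stress sigma = p * D / (2 * t), in MPa."""
    return p_mpa * d_mm / (2.0 * t_mm)

def safety_factor(yield_mpa, p_mpa, d_mm, t_mm):
    """Ratio of yield strength to hoop stress."""
    return yield_mpa / hoop_stress(p_mpa, d_mm, t_mm)

P = 2.2        # working pressure from the abstract, MPa
D = 1200.0     # hypothetical inner diameter, mm
YIELD = 250.0  # hypothetical yield strength of the shell steel, MPa

sf_nominal = safety_factor(YIELD, P, D, 10.0)         # as-built 10 mm wall
sf_corroded = safety_factor(YIELD, P, D, 10.0 - 0.5)  # after 0.5 mm loss
```

The FEA in the study captures the local stress concentration at the corroded bottom, which a uniform thin-wall formula cannot; the sketch only shows why even a 0.5 mm uniform loss measurably lowers the safety factor.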

Keywords: corrosion, non-destructive testing, finite element analysis, safety factor, structural reliability

Procedia PDF Downloads 38
543 The Use of Boosted Multivariate Trees in Medical Decision-Making for Repeated Measurements

Authors: Ebru Turgal, Beyza Doganay Erdogan

Abstract:

Machine learning aims to model the relationship between a response and its features. Medical decision-making researchers would like to make decisions about patients' course and treatment by examining repeated measurements over time, and boosting is now used in machine learning as an influential tool for these aims. The aim of this study is to demonstrate the use of multivariate tree boosting in this field; the main reason for adopting this approach in decision-making is the ease with which it handles complex relationships. To show how multivariate tree boosting can identify important features and feature-time interactions, we used data collected retrospectively from Ankara University Chest Diseases Department records. The dataset includes repeated PF ratio measurements, with follow-up planned for 120 hours. A set of different models was tested. In conclusion, classification by a weighted combination of classifiers is a reliable method, as has been shown in simulations several times. Furthermore, time-varying variables can be taken into consideration within this framework, making accurate decisions possible for regression and survival problems.
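The boosting idea underlying the study — building a strong predictor as a weighted combination of weak trees, each fitted to the residuals of the ensemble so far — can be sketched with one-split regression stumps. This is a generic gradient-boosting illustration on hypothetical data, not the authors' multivariate tree boosting:

```python
def fit_stump(x, r):
    """Best single-split stump (threshold, left mean, right mean) for
    squared error on residuals r."""
    best = None
    for t in sorted(set(x)):
        left = [ri for xi, ri in zip(x, r) if xi <= t]
        right = [ri for xi, ri in zip(x, r) if xi > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((ri - lm) ** 2 for ri in left) +
               sum((ri - rm) ** 2 for ri in right))
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    return best[1:]

def boost(x, y, rounds=50, lr=0.1):
    """Gradient boosting for squared loss: each stump fits the current
    residuals and is added with a small learning rate."""
    f0 = sum(y) / len(y)
    pred = [f0] * len(y)
    stumps = []
    for _ in range(rounds):
        r = [yi - pi for yi, pi in zip(y, pred)]
        t, lm, rm = fit_stump(x, r)
        stumps.append((t, lm, rm))
        pred = [pi + lr * (lm if xi <= t else rm)
                for xi, pi in zip(x, pred)]
    return f0, stumps, pred

# Hypothetical 1-D example: a step function the ensemble should recover
x = [0, 1, 2, 3, 4, 5, 6, 7]
y = [1.0, 1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0]
f0, stumps, pred = boost(x, y)
```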

Keywords: boosted multivariate trees, longitudinal data, multivariate regression tree, panel data

Procedia PDF Downloads 186
542 Analysis and Prediction of COVID-19 by Using Recurrent LSTM Neural Network Model in Machine Learning

Authors: Grienggrai Rajchakit

Abstract:

As is well known, coronavirus was declared a pandemic by the WHO. It spread across the world within a few days. To control this spread, maintaining social distance and self-preventive measures by every citizen are the best strategies. Many researchers and scientists are continuing their research toward an effective vaccine. Machine learning models find that the coronavirus disease behaves in an exponential manner. To mitigate the consequences of this pandemic, efficient steps should be taken to analyze the disease. In this paper, a recurrent neural network model is chosen to predict the number of active cases in a particular state. Making this prediction requires a database: the database of COVID-19 is downloaded from the KAGGLE website and analyzed by applying a recurrent LSTM neural network with univariate features to predict the number of active cases of patients suffering from the coronavirus. The downloaded database is divided into training and testing sets for the chosen neural network model. The model is trained with the training dataset and tested with the testing dataset to predict the number of active cases in a particular state; here, we have concentrated on Andhra Pradesh state.
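The data-preparation step implied above — framing a univariate case-count series as supervised (window, next value) pairs and splitting them into training and testing sets — can be sketched as follows. The window length, case counts and split fraction are hypothetical; the LSTM itself requires a deep learning framework and is not shown:

```python
def make_windows(series, n_steps):
    """Frame a univariate series as (window -> next value) pairs for a
    recurrent model."""
    X, y = [], []
    for i in range(len(series) - n_steps):
        X.append(series[i:i + n_steps])
        y.append(series[i + n_steps])
    return X, y

def split(X, y, train_frac=0.8):
    """Chronological train/test split (no shuffling for time series)."""
    k = int(len(X) * train_frac)
    return (X[:k], y[:k]), (X[k:], y[k:])

# Hypothetical daily active-case counts for one state
cases = [10, 14, 21, 30, 44, 60, 85, 110, 150, 190, 240, 300]
X, y = make_windows(cases, n_steps=3)
(train_X, train_y), (test_X, test_y) = split(X, y)
```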

Keywords: COVID-19, coronavirus, KAGGLE, LSTM neural network, machine learning

Procedia PDF Downloads 140
541 A Case Study on Performance of Isolated Bridges under Near-Fault Ground Motion

Authors: Daniele Losanno, H. A. Hadad, Giorgio Serino

Abstract:

This paper presents a numerical investigation of the seismic performance of a benchmark bridge with different optimal isolation systems under near-fault ground motion. Very large displacements often make seismic isolation an unfeasible solution due to boundary conditions, especially for existing bridges or in high-risk seismic regions. Near-fault ground motions are most likely to affect either structures with a long natural period, such as isolated structures, or structures sensitive to velocity content, such as viscously damped structures. The work analyzes the seismic performance of a three-span continuous bridge designed with isolation systems having different levels of damping. The case study was analyzed in several configurations: (a) simply supported, (b) isolated with lead rubber bearings (LRBs), (c) isolated with rubber isolators and 10% classical damping (HDLRBs), and (d) isolated with rubber isolators and a 70% supplemental damping ratio. Case (d) represents an alternative control strategy that combines seismic isolation with additional supplemental damping, taking advantage of both solutions. The bridge is modeled in SAP2000 and solved by time-history direct-integration analyses under a set of six recorded near-fault ground motions. In addition, a set of analyses under the seismic action provided by the Italian code is conducted, in order to evaluate the effectiveness of the suggested optimal control strategies under far-field seismic action. The results demonstrate that an isolated bridge equipped with HDLRBs and a total equivalent damping ratio of 70% is a very effective design solution both for mitigating displacement demand at the isolation level and for reducing base shear in the piers, even under near-fault ground motion.
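The benefit of raising the equivalent damping ratio from 10% to 70% can be illustrated with the standard code-type damping correction factor — for example the Eurocode 8 expression eta = sqrt(10 / (5 + xi)), which scales the 5%-damped spectral displacement to a damping ratio xi (in %). This is a generic code formula used for illustration, not the paper's own analysis method:

```python
import math

def damping_correction(xi_percent):
    """Eurocode 8 damping correction factor eta = sqrt(10 / (5 + xi)),
    scaling the 5%-damped spectral ordinate to damping xi (in %).
    EC8 also imposes a lower bound eta >= 0.55, applied here."""
    return max(math.sqrt(10.0 / (5.0 + xi_percent)), 0.55)

eta_10 = damping_correction(10.0)  # HDLRB case in the abstract
eta_70 = damping_correction(70.0)  # supplemental-damping case
```

Even with the code's 0.55 floor, the 70% case yields a markedly smaller spectral displacement than the 10% case, consistent with the displacement-demand mitigation reported in the study.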

Keywords: isolated bridges, near-fault motion, seismic response, supplemental damping, optimal design

Procedia PDF Downloads 265
540 Breast Cancer Diagnosing Based on Online Sequential Extreme Learning Machine Approach

Authors: Musatafa Abbas Abbood Albadr, Masri Ayob, Sabrina Tiun, Fahad Taha Al-Dhief, Mohammad Kamrul Hasan

Abstract:

Breast cancer (BC) is considered one of the most frequent causes of cancer death in women aged 40 to 55. BC is diagnosed using digital images of fine needle aspirates (FNA) of both benign and malignant tumors of the breast mass. This work therefore proposes the Online Sequential Extreme Learning Machine (OSELM) algorithm for diagnosing BC from the tumor features of the breast mass. The current work uses the Wisconsin Diagnosis Breast Cancer (WDBC) dataset, which contains 569 samples (357 samples in the benign class and 212 in the malignant class). Numerous assessment metrics were used to evaluate the proposed OSELM algorithm, including specificity, precision, F-measure, accuracy, G-mean, MCC and recall. According to the experimental outcomes, the best performance of the proposed OSELM was 97.66% accuracy, 98.39% recall, 95.31% precision, 97.25% specificity, 96.83% F-measure, 95.00% MCC and 96.84% G-mean. The proposed OSELM algorithm demonstrates promising results in diagnosing BC, and its performance was superior to all compared methods in classification rate.

Keywords: breast cancer, machine learning, online sequential extreme learning machine, artificial intelligence

Procedia PDF Downloads 96
539 Statistical Discrimination of Blue Ballpoint Pen Inks by Diamond Attenuated Total Reflectance (ATR) FTIR

Authors: Mohamed Izzharif Abdul Halim, Niamh Nic Daeid

Abstract:

Determining the source of pen inks used on a variety of documents is important for forensic document examiners. The examination of inks is often performed to differentiate between them in order to evaluate the authenticity of a document. A ballpoint pen ink consists of synthetic dyes (acidic and/or basic), pigments (organic and/or inorganic) and a range of additives. Inks of similar color may differ in composition and are frequently the subject of forensic examination. This study emphasizes blue ballpoint pen inks available on the market, because approximately 80% of questioned-document analyses reportedly involve ballpoint pen ink. Analytical techniques such as thin layer chromatography, high-performance liquid chromatography, UV-vis spectroscopy, luminescence spectroscopy and infrared spectroscopy have been used in the analysis of ink samples. In this study, diamond attenuated total reflectance (ATR) FTIR is straightforward and preferable in forensic science, as it requires no sample preparation and minimal analysis time. The data obtained were further analyzed using multivariate chemometric methods, which enable the extraction of more information based on the similarities and differences among samples in a dataset. The results indicate that some pens from the same manufacturer can be similar in composition, whereas discrete types can differ significantly.
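The PCA step typical of such chemometric workflows (and listed in the keywords) projects mean-centred spectra onto their leading principal components so that similar inks cluster in score space. A minimal sketch via SVD, using synthetic "absorbance" rows in place of the study's real ATR FTIR spectra:

```python
import numpy as np

def pca_scores(spectra, n_components=2):
    """Project mean-centred spectra onto their leading principal
    components via SVD; returns (scores, explained_variance_ratio)."""
    Xc = spectra - spectra.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T
    var = s ** 2 / (s ** 2).sum()
    return scores, var[:n_components]

# Hypothetical spectra: two ink groups differing by a constant offset
rng = np.random.default_rng(1)
base = rng.normal(size=100)
group_a = np.stack([base + rng.normal(scale=0.01, size=100)
                    for _ in range(5)])
group_b = np.stack([base + 1.0 + rng.normal(scale=0.01, size=100)
                    for _ in range(5)])
scores, evr = pca_scores(np.vstack([group_a, group_b]))
```

On this synthetic data the first component carries nearly all the variance and separates the two groups, which is the pattern one looks for when discriminating ink sources.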

Keywords: ATR FTIR, ballpoint, multivariate chemometric, PCA

Procedia PDF Downloads 441
538 Automated Classification of Hypoxia from Fetal Heart Rate Using Advanced Data Models of Intrapartum Cardiotocography

Authors: Malarvizhi Selvaraj, Paul Fergus, Andy Shaw

Abstract:

Uterine contractions produced during labour have the potential to damage the foetus by diminishing maternal blood flow to the placenta. To observe this phenomenon, labour and delivery are routinely monitored using cardiotocography monitors. An obstetrician usually diagnoses foetal hypoxia by interpreting cardiotocography recordings. However, cardiotocography capture and interpretation is time-consuming and subjective, and often leads to misclassification that causes damage to the foetus and unnecessary caesarean sections. Both have a high impact on the foetus and on the cost to national healthcare services. Automatic detection of foetal heart rate abnormalities may be an objective solution that helps reduce unnecessary medical interventions, as reported in several studies. The aim of this paper is to provide a system for better identification and interpretation of abnormalities in the foetal heart rate using RStudio. An open dataset of 552 intrapartum recordings was filtered with a 0.034 Hz filter in an attempt to remove noise while keeping as much of the discriminative data as possible. Features were chosen following an extensive literature review, which concluded with FIGO features such as acceleration, deceleration, mean, variance and standard deviation. These five features were extracted from the 552 recordings. Using them, recordings are classified as either normal or abnormal; if a recording is abnormal, there is a higher likelihood of hypoxia.
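The feature extraction described above can be sketched on a single heart-rate trace. This is a deliberately simplified stand-in — clinical FIGO accelerations/decelerations are defined over 15-bpm, 15-second episodes, whereas here samples beyond a threshold are merely counted, and the trace values are hypothetical:

```python
from statistics import mean, pvariance, pstdev

def fhr_features(fhr, baseline=None, delta=15.0):
    """Summary features of a fetal heart rate trace (bpm): mean, variance,
    standard deviation, plus counts of samples more than `delta` bpm above
    (acceleration-like) or below (deceleration-like) the baseline."""
    m = mean(fhr)
    base = m if baseline is None else baseline
    accel = sum(1 for v in fhr if v > base + delta)
    decel = sum(1 for v in fhr if v < base - delta)
    return {"mean": m, "variance": pvariance(fhr), "std": pstdev(fhr),
            "accel_samples": accel, "decel_samples": decel}

# Hypothetical 1 Hz trace segment (bpm)
trace = [140, 142, 138, 141, 160, 158, 139, 120, 118, 140]
feats = fhr_features(trace, baseline=140.0)
```

A feature vector of this shape per recording is what a downstream normal/abnormal classifier would consume.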

Keywords: cardiotocography, foetus, intrapartum, hypoxia

Procedia PDF Downloads 201
537 Self-Attention Mechanism for Target Hiding Based on Satellite Images

Authors: Hao Yuan, Yongjian Shen, Xiangjun He, Yuheng Li, Zhouzhou Zhang, Pengyu Zhang, Minkang Cai

Abstract:

Remote sensing data can support decision-making in disaster assessment or disaster relief. Traditional processing of sensitive targets in remote sensing mapping relies mainly on manual retrieval and image editing tools, which is inefficient. Deep-learning-based methods for sensitive target hiding are faster and more flexible, but have drawbacks in training time and computational cost. This paper proposes a target hiding model, Self-Attention (SA) Deepfill, which uses self-attention modules to replace some of the gated convolution layers in image inpainting. This operation reduces the model's computation while improving performance. We also add free-form masks to the model's training to enhance its generality. Experiments on an open remote sensing dataset prove the efficiency of our method. Moreover, experimental comparison shows that the proposed method can train longer without over-fitting. Finally, compared with existing methods, the proposed model has a lower computational load and better performance.
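The self-attention operation at the heart of the proposed module is standard scaled dot-product attention: every position attends to every other, so the inpainted region can borrow texture from distant image patches. A minimal numpy sketch over hypothetical flattened patch features, not the paper's gated-convolution architecture:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a set of feature vectors
    (e.g. flattened image patches): softmax(Q K^T / sqrt(d)) V."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d)
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    A = np.exp(logits)
    A /= A.sum(axis=-1, keepdims=True)            # rows are distributions
    return A @ V, A

rng = np.random.default_rng(0)
n_patches, d_in, d_k = 16, 32, 8
X = rng.normal(size=(n_patches, d_in))            # hypothetical patch features
Wq, Wk, Wv = (rng.normal(size=(d_in, d_k)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
```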

Keywords: remote sensing mapping, image inpainting, self-attention mechanism, target hiding

Procedia PDF Downloads 108
536 Blood Glucose Level Measurement from Breath Analysis

Authors: Tayyab Hassan, Talha Rehman, Qasim Abdul Aziz, Ahmad Salman

Abstract:

Constant monitoring of blood glucose level is necessary to maintain the health of diabetic patients and to alert medical specialists to take preemptive measures before the onset of complications. Current clinical monitoring of blood glucose repeatedly uses invasive methods, which are uncomfortable and may result in infections in diabetic patients. Several attempts have been made to develop non-invasive techniques for blood glucose measurement, but the existing methods are unreliable and less accurate, and approaches claiming high accuracy have not been tested on extended datasets, so their results are not statistically significant. It is well established that acetone concentration in breath is directly related to blood glucose level. In this paper, we develop a first-of-its-kind reliable, high-accuracy breath analyzer for non-invasive blood glucose measurement. The acetone concentration in breath was measured using an MQ 138 sensor in samples collected from local hospitals in Pakistan, involving one hundred patients. The blood glucose levels of these patients were determined using the conventional invasive clinical method. We propose a linear regression model trained to map breath acetone level to the collected blood glucose level, achieving high accuracy.
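The proposed mapping from breath acetone to blood glucose is ordinary least-squares linear regression, which can be sketched directly. The calibration pairs below are hypothetical, generated from an assumed relation glucose = 80 + 60 × acetone, not the study's clinical data:

```python
def fit_line(x, y):
    """Ordinary least squares for y = a * x + b; returns (a, b)."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    a = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
    return a, ybar - a * xbar

# Hypothetical calibration pairs: breath acetone (ppm) vs. clinically
# measured blood glucose (mg/dL)
acetone = [0.5, 0.8, 1.1, 1.6, 2.0]
glucose = [80 + 60 * a for a in acetone]
slope, intercept = fit_line(acetone, glucose)

def predict_glucose(a_ppm):
    """Predicted blood glucose from a breath acetone reading."""
    return slope * a_ppm + intercept
```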

Keywords: blood glucose level, breath acetone concentration, diabetes, linear regression

Procedia PDF Downloads 155