Search results for: sampling algorithms
Paper Count: 4990

4270 Acceleration Techniques of DEM Simulation for Dynamics of Particle Damping

Authors: Masato Saeki

Abstract:

Presented herein is a novel algorithm for calculating the damping performance of particle dampers. The particle damper is a passive vibration control technique with many practical applications due to its simple design. It consists of granular materials constrained to move between two ends in the cavity of a primary vibrating system. The damping effect results from the exchange of momentum during the impact of the granular materials against the wall of the cavity. This damping has the advantage of being independent of the environment; therefore, particle damping can be applied in extreme temperature environments where most conventional dampers would fail. Many experimental studies have shown that the efficiency of particle dampers is high in the case of resonant vibration. To use particle dampers effectively, it is necessary to solve the equations of motion for each particle, considering the granularity. The discrete element method (DEM) has been found to be effective for revealing the dynamics of particle damping. In this method, individual particles are treated as rigid bodies, and interparticle collisions are modeled by mechanical elements such as springs and dashpots. However, the computational cost is significant, since the equation of motion for each particle must be solved at each time step. New algorithms are therefore needed to improve the computational efficiency of the DEM. In this study, new algorithms are proposed for implementing a high-performance DEM. On the assumption that the granular particles behave identically in each divided area of the damper container, the contact force of the primary system with all particles can be taken as the product of the number of divided areas and the contact force of the primary system with the granular materials in one divided area. This simplification makes it possible to reduce the calculation time considerably. The validity of this calculation method was investigated, and the calculated results were compared with experimental ones. This paper also presents the results of experimental studies of the performance of particle dampers. It is shown that the particle radius affects the noise level, and that the particle size and particle material influence the damper performance.
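
The core of the proposed speedup is the assumption that each divided area contributes an identical wall force, so only one area's particles need to be integrated. A minimal sketch in Python (all names and parameter values are assumptions for illustration, not the paper's code):

```python
import numpy as np

# A minimal sketch of the divided-area speedup (parameter values and array
# contents are illustrative assumptions, not the paper's implementation).
# Wall contacts use a linear spring-dashpot model:
#   F = k * overlap + c * approach_speed   (while in contact)

k, c = 1.0e4, 2.0       # contact stiffness [N/m] and damping [N*s/m] (assumed)
n_areas = 8             # number of identical divided areas in the container

def wall_contact_force(penetration, approach_speed):
    """Spring-dashpot force magnitude on the cavity wall; zero if no contact."""
    if penetration <= 0.0:
        return 0.0
    return k * penetration + c * max(approach_speed, 0.0)

# Representative particles of ONE divided area (instead of all particles).
penetrations = np.array([-0.0005, 0.0002, 0.0010])   # [m]; negative = no contact
speeds = np.array([0.30, 0.10, 0.25])                # approach speeds [m/s]

per_area = sum(wall_contact_force(p, v) for p, v in zip(penetrations, speeds))
total_on_primary = n_areas * per_area                # the paper's simplification
print(f"force per area: {per_area:.2f} N, total on primary: {total_on_primary:.2f} N")
```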

Keywords: particle damping, discrete element method (DEM), granular materials, numerical analysis, equivalent noise level

Procedia PDF Downloads 454
4269 Logical-Probabilistic Modeling of the Reliability of Complex Systems

Authors: Sergo Tsiramua, Sulkhan Sulkhanishvili, Elisabed Asabashvili, Lazare Kvirtia

Abstract:

The paper presents logical-probabilistic methods, models and algorithms for the reliability assessment of complex systems, based on which a web application for structural analysis and reliability assessment of systems was created. The reliability assessment process included the following stages, which were reflected in the application: 1) construction of a graphical scheme of the structural reliability of the system; 2) transformation of the graphical scheme into a logical representation and modeling of the shortest paths of successful functioning of the system; 3) description of the system operability condition with a logical function in disjunctive normal form (DNF); 4) transformation of the DNF into orthogonal disjunctive normal form (ODNF) using the orthogonalization algorithm; 5) replacement of logical elements with probabilistic elements in the ODNF, yielding a reliability estimation polynomial and a quantitative reliability value; 6) calculation of the weights of the elements. Using the logical-probabilistic methods, models and algorithms discussed in the paper, special software was created, by means of which a quantitative assessment of the reliability of systems with a complex structure is produced. As a result, structural analysis of systems and the research and design of systems with optimal structure are carried out.
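
Stages 3-5 reduce to simple arithmetic once the operability function is in orthogonal form: because ODNF terms are pairwise disjoint, their probabilities add. A minimal sketch under an assumed three-element example (not the paper's software):

```python
# A minimal sketch of stages 3-5 on an assumed three-element system (the
# element reliabilities and structure are illustrative, not the paper's).
# Operability: the system works if element 1 works AND (element 2 OR 3 works).
# Orthogonal DNF (mutually exclusive terms):  x1*x2  OR  x1*~x2*x3

p = {1: 0.95, 2: 0.90, 3: 0.85}   # element reliabilities (assumed)

# Each ODNF term maps element -> required state (True = works, False = fails).
odnf = [
    {1: True, 2: True},              # x1 * x2
    {1: True, 2: False, 3: True},    # x1 * ~x2 * x3
]

def term_probability(term):
    prob = 1.0
    for elem, state in term.items():
        prob *= p[elem] if state else (1.0 - p[elem])
    return prob

# Because ODNF terms are pairwise disjoint, their probabilities simply add.
reliability = sum(term_probability(t) for t in odnf)
print(f"system reliability: {reliability:.5f}")   # 0.855 + 0.08075 = 0.93575
```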

Keywords: complex systems, logical-probabilistic methods, orthogonalization algorithm, reliability, weight of element

Procedia PDF Downloads 74
4268 An MrPPG Method for Face Anti-Spoofing

Authors: Lan Zhang, Cailing Zhang

Abstract:

In recent years, many face anti-spoofing algorithms have achieved high detection accuracy when detecting 2D presentation attacks or 3D mask attacks alone, but their detection performance drops considerably in multidimensional and cross-dataset tests. The rPPG (remote photoplethysmography) method for face anti-spoofing uses the vital signs unique to a real face to distinguish real faces from spoofing attacks, so it offers strong stability compared with other methods; however, its detection rate for 2D attacks needs improvement. In this paper, we therefore propose an improved rPPG method (MrPPG) for face anti-spoofing that uses color space fusion, exploits the correlation of pulse signals between real face regions and background regions, and introduces a recurrent neural network (LSTM) to improve accuracy on 2D attacks. MrPPG also shows high accuracy and good stability in multidimensional and cross-dataset face anti-spoofing tests. The improved method was validated on the Replay-Attack, CASIA-FASD, SiW and HKBU_MARs_V2 datasets; the experimental results show that the performance and stability of the proposed algorithm are superior to those of many state-of-the-art algorithms.
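
As an illustration of the rPPG idea, the following sketch compares a pulse-like face-region signal with a background-region signal (a synthetic stand-in, not the authors' MrPPG pipeline; region extraction, color space fusion, and the LSTM classifier are omitted):

```python
import numpy as np

# Minimal rPPG-style sketch (illustrative): extract a pulse-like signal as
# the mean green-channel value of a region per frame, then compare the face
# and background regions. A live face carries a periodic pulse the static
# background lacks, while a replayed video imprints shared flicker on both.

fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
rng = np.random.default_rng(0)

# Simulated per-frame mean green values (stand-ins for real video regions).
face_signal = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.2 * rng.standard_normal(t.size)  # ~72 bpm
background = 0.2 * rng.standard_normal(t.size)                                       # noise only

def normalized(x):
    return (x - x.mean()) / x.std()

# Low face/background correlation is consistent with a live face.
corr = float(np.corrcoef(normalized(face_signal), normalized(background))[0, 1])
print(f"face/background correlation: {corr:+.3f}")
```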

Keywords: face anti-spoofing, face presentation attack detection, remote photoplethysmography, MrPPG

Procedia PDF Downloads 179
4267 Optimizing Stormwater Sampling Design for Estimation of Pollutant Loads

Authors: Raja Umer Sajjad, Chang Hee Lee

Abstract:

Stormwater runoff is a leading contributor to the pollution of receiving waters. In response, an efficient stormwater monitoring program is required to quantify and eventually reduce stormwater pollution. The overall goals of stormwater monitoring programs primarily include the identification of high-risk dischargers and the development of total maximum daily loads (TMDLs). The challenge in developing a better monitoring program is to reduce the variability in flux estimates due to sampling errors, since the success of a monitoring program depends mainly on the accuracy of the estimates. Apart from sampling errors, manpower and budgetary constraints also influence the quality of the estimates. This study attempted to develop an optimum stormwater monitoring design considering both the cost and the quality of the estimated pollutant flux. Three years of stormwater monitoring data (2012-2014) from a mixed land use site within the Geumhak watershed, South Korea, were evaluated. The regional climate is humid, and precipitation is usually well distributed throughout the year. Because the investigation of a large number of water quality parameters is time-consuming and resource-intensive, Principal Component Analysis (PCA) was applied to identify a suite of easy-to-measure parameters to act as surrogates. Means, standard deviations, coefficients of variation (CV) and other simple statistics were computed using the multivariate statistical analysis software SPSS 22.0. The implication of sampling time on monitoring results, the number of samples required during a storm event, and the impact of the seasonal first flush were also identified. Based on the observations derived from the PCA biplot and the correlation matrix, total suspended solids (TSS) was identified as a potential surrogate for turbidity, total phosphorus, and heavy metals such as lead, chromium, and copper, whereas Chemical Oxygen Demand (COD) was identified as a surrogate for organic matter. The CVs among the monitored water quality parameters were high (ranging from 3.8 to 15.5), suggesting that a grab sampling design for estimating mass emission rates in the study area can lead to errors due to large variability. The TSS discharge load calculation error was only 2% between two sample size approaches, i.e., 17 samples per storm event and 6 equally distributed samples per storm event. Both seasonal first flush and event first flush phenomena were observed for most water quality parameters in the study area. Samples taken at the initial stage of a storm event generally overestimate the mass emissions; however, it was found that a grab sample collected after the initial hour of the storm event more closely approximates the mean concentration of the event. It was concluded that site- and regional-climate-specific interventions can be made to optimize the stormwater monitoring program in order to make it more effective and economical.
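
As a sketch of the surrogate screening step, PCA loadings computed on synthetic water quality data show how co-varying parameters group with TSS (illustrative data, not the study's measurements):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Minimal sketch of surrogate-parameter screening with PCA (synthetic data;
# the variable set mirrors the abstract, not the study's actual records).
rng = np.random.default_rng(1)
n = 60                                   # storm-event samples
tss = rng.lognormal(4.0, 0.8, n)         # total suspended solids
X = np.column_stack([
    tss,
    0.9 * tss + rng.normal(0, 20, n),    # turbidity, strongly tied to TSS
    0.01 * tss + rng.normal(0, 0.3, n),  # total phosphorus
    rng.lognormal(3.0, 0.5, n),          # COD, mostly independent of TSS
])
names = ["TSS", "turbidity", "TP", "COD"]

pca = PCA(n_components=2).fit(StandardScaler().fit_transform(X))
# Loadings show which parameters move together; parameters loading on the
# same component as TSS are candidates for TSS as a surrogate.
for name, load in zip(names, pca.components_.T):
    print(f"{name:10s} PC1={load[0]:+.2f}  PC2={load[1]:+.2f}")
```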

Keywords: first flush, pollutant load, stormwater monitoring, surrogate parameters

Procedia PDF Downloads 241
4266 Quantum Cryptography: Classical Cryptography Algorithms’ Vulnerability State as Quantum Computing Advances

Authors: Tydra Preyear, Victor Clincy

Abstract:

Quantum computing presents many computational advantages over classical computing methods due to its use of quantum mechanics. The capability of this computing infrastructure poses threats to standard cryptographic systems such as RSA and AES, which are designed for classical computing environments. This paper discusses the impact of quantum computing on cryptography, focusing on the evolution from classical cryptographic concepts to quantum and post-quantum cryptographic concepts. Standard cryptography is essential for securing data through encryption and decryption methods, and these methods face vulnerability problems due to the advancement of quantum computing. To counter these vulnerabilities, quantum cryptography and post-quantum cryptography are proposed. Quantum cryptography uses principles such as the uncertainty principle and photon polarization to provide secure data transmission, and the concept of quantum key distribution is introduced to ensure more secure communication channels by distributing cryptographic keys. Post-quantum cryptography, meanwhile, aims to improve cryptographic algorithms so that they remain secure against attacks by both classical and quantum computers. Throughout this exploration, the paper highlights the critical role of advancing cryptographic methods in keeping data integrity and privacy safe from quantum computing. Future research directions include more effective cryptographic methods enabled by advancing technology.
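
As an illustration of quantum key distribution, a classical simulation of the BB84 sifting step (no real quantum channel is involved; bit counts and seeds are assumed):

```python
import numpy as np

# Minimal BB84-style sketch of quantum key distribution (a classical
# simulation for illustration only).
rng = np.random.default_rng(7)
n = 32

alice_bits  = rng.integers(0, 2, n)   # raw key bits
alice_bases = rng.integers(0, 2, n)   # 0 = rectilinear (+), 1 = diagonal (x)
bob_bases   = rng.integers(0, 2, n)   # Bob measures in random bases

# When bases match, Bob reads Alice's bit; otherwise his result is random.
match = alice_bases == bob_bases
bob_bits = np.where(match, alice_bits, rng.integers(0, 2, n))

# Sifting: publicly compare bases (not bits) and keep matching positions.
sifted_key = alice_bits[match]
assert np.array_equal(sifted_key, bob_bits[match])  # no eavesdropper here
print(f"kept {match.sum()} of {n} bits:", "".join(map(str, sifted_key)))
```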

Keywords: quantum computing, quantum cryptography, cryptography, data integrity and privacy

Procedia PDF Downloads 27
4265 An Occupational Health Risk Assessment for Exposure to Benzene, Toluene, Ethylbenzene and Xylenes: A Case Study of Informal Traders in a Metro Centre (Taxi Rank) in South Africa

Authors: Makhosazana Dubazana

Abstract:

Many South African commuters use minibus taxis daily and are connected to the informal transport network through metro centres informally known as taxi ranks. Taxi ranks form part of an economic nexus for many informal traders, connecting them to commuters, their prime clientele. The traders work in designated areas along the periphery of the taxi rank and between taxi lanes. Informal traders are therefore at risk of the adverse health effects associated with the inhalation of exhaust fumes from minibus taxis. Of the exhaust emissions, benzene, toluene, ethylbenzene and xylenes (BTEX) have high toxicity. Purpose: The purpose of this study was to conduct a human health risk assessment for informal traders, looking at their exposure to BTEX compounds. Methods: The study was conducted in a subsection of a taxi rank that is representative of the entire taxi rank. This subsection has a daily average of 400 minibus taxis moving through it and an average of 60 informal traders working in it. For the health risk assessment, a questionnaire was administered to understand the occupational behaviour of the informal traders; this was used to deduce the exposure scenarios and sampling locations. Three sampling campaigns were run for an average of 10 hours each, covering the average working hours of traders. A gas chromatograph was used to collect continuous ambient air samples at 15-minute intervals. Results: Over the three sampling days, the average concentrations were 8.46 ppb, 0.63 ppb, 1.27 ppb and 1.0 ppb for benzene, toluene, ethylbenzene, and xylene, respectively. The average cancer risk is 9.46E-03. In several cases, there were incidences of unacceptable risk for the cumulative exposure to all four BTEX compounds. Conclusion: This study adds to the body of knowledge on the human health risk effects of urban BTEX pollution, focusing on the impact of urban BTEX on high-risk persons such as informal traders in Southern Africa.
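
For illustration, an inhalation cancer-risk calculation in the generic US EPA style is sketched below; every exposure parameter and the unit risk value are assumptions for demonstration, not the study's inputs:

```python
# Minimal sketch of an inhalation cancer-risk calculation in the style of
# US EPA guidance (all parameter values here are illustrative assumptions,
# not the study's inputs or its reported 9.46E-03 result).

def ppb_to_ugm3(ppb, mol_weight):
    """Convert ppb to ug/m3 at ~25 C and 1 atm (24.45 L/mol molar volume)."""
    return ppb * mol_weight / 24.45

benzene_ppb = 8.46                     # measured average from the abstract
conc = ppb_to_ugm3(benzene_ppb, 78.11) # benzene molar mass ~78.11 g/mol

# Exposure concentration averaged over a lifetime:  EC = C * ET * EF * ED / AT
ET, EF, ED = 10, 300, 30               # h/day, days/yr, years (assumed scenario)
AT = 70 * 365 * 24                     # averaging time: 70-year lifetime in hours
EC = conc * ET * EF * ED / AT

IUR = 7.8e-6                           # inhalation unit risk per ug/m3 (assumed)
risk = EC * IUR
print(f"EC = {EC:.2f} ug/m3, lifetime cancer risk = {risk:.2e}")
```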

Keywords: human health risk assessment, informal traders, occupational risk, urban BTEX

Procedia PDF Downloads 233
4264 Identification of Biological Pathways Causative for Breast Cancer Using Unsupervised Machine Learning

Authors: Karthik Mittal

Abstract:

This study performs an unsupervised machine learning analysis to find clusters of related SNPs that highlight biological pathways important to the mechanisms of breast cancer. Studying genetic variations in isolation is insufficient because these variations are known to modulate protein production and function, and the downstream effects of these modifications on biological outcomes are highly interconnected. After extracting the SNPs and their effects on different types of breast cancer using the MR-Base library, two unsupervised machine learning clustering algorithms were applied to the genetic variants: a k-means clustering algorithm and a hierarchical clustering algorithm; furthermore, principal component analysis was executed to visually represent the data. These algorithms clustered on each SNP's beta value for the three types of breast cancer tested in this project (estrogen-receptor-positive breast cancer, estrogen-receptor-negative breast cancer, and breast cancer in general). Two significant genetic pathways validated the clustering produced by this project: the MAPK signaling pathway and the connection between the BRCA2 gene and the ESR1 gene. This study provides a first proof of concept showing the importance of unsupervised machine learning in interpreting GWAS summary statistics.
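
A minimal sketch of the clustering workflow on synthetic per-SNP beta values (the real analysis would use effect sizes extracted from MR-Base for the three phenotypes):

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.decomposition import PCA

# Minimal sketch of the clustering workflow (synthetic beta values).
rng = np.random.default_rng(3)
# 100 SNPs x 3 columns: beta on ER+, ER-, and overall breast cancer.
betas = np.vstack([
    rng.normal([0.05, 0.00, 0.03], 0.01, (50, 3)),   # one pathway-like group
    rng.normal([-0.02, 0.04, 0.01], 0.01, (50, 3)),  # another group
])

kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(betas)
hier_labels = AgglomerativeClustering(n_clusters=2).fit_predict(betas)

# PCA projection for visual inspection of the cluster structure.
coords = PCA(n_components=2).fit_transform(betas)
print("k-means sizes:", np.bincount(kmeans_labels))
print("hierarchical sizes:", np.bincount(hier_labels))
print("first SNP PCA coords:", np.round(coords[0], 3))
```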

Keywords: breast cancer, computational biology, unsupervised machine learning, k-means, PCA

Procedia PDF Downloads 146
4263 Semiautomatic Calculation of Ejection Fraction Using Echocardiographic Image Processing

Authors: Diana Pombo, Maria Loaiza, Mauricio Quijano, Alberto Cadena, Juan Pablo Tello

Abstract:

In this paper, we present a semi-automatic tool for calculating the ejection fraction from an echocardiographic video signal derived from a DICOM-format database of Clinica de la Costa, Barranquilla. We describe each of the steps and methods used to arrive at the calculation, including the acquisition and formation of the test samples, the processing, and finally the calculation of the parameters needed to obtain the ejection fraction. Two image segmentation methods were compared within a methodological framework that is similar only in the initial stages of processing (filtering and image enhancement) and differs at the end, when the algorithms are implemented (Active Contour and Region Growing algorithms). The results were compared with the measurements obtained by two medical specialists in cardiology, who calculated the ejection fraction of the study samples using the traditional method, which consists of drawing the region of interest directly on the computer using the echocardiography equipment and a simple equation to calculate the desired value. The results showed that if the quality of the video samples is good (i.e., after pre-processing there is evidence of improved contrast), the values provided by the tool are substantially close to those reported by the physicians; moreover, the correlation between physicians does not vary significantly.
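
The final calculation itself is straightforward once the ventricular volumes are segmented; a minimal sketch with assumed volumes:

```python
# Minimal sketch of the ejection-fraction calculation from segmented volumes
# (the volume values are illustrative; in the tool they would come from the
# segmented left-ventricle contours at end-diastole and end-systole).

def ejection_fraction(edv_ml, esv_ml):
    """EF (%) = stroke volume / end-diastolic volume * 100."""
    return (edv_ml - esv_ml) / edv_ml * 100.0

edv = 120.0   # end-diastolic volume (EDV) in mL (assumed example)
esv = 50.0    # end-systolic volume (ESV) in mL (assumed example)
print(f"EF = {ejection_fraction(edv, esv):.1f}%")   # -> EF = 58.3%
```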

Keywords: echocardiography, DICOM, processing, segmentation, EDV, ESV, ejection fraction

Procedia PDF Downloads 427
4262 Transparency of Algorithmic Decision-Making: Limits Posed by Intellectual Property Rights

Authors: Olga Kokoulina

Abstract:

Today, algorithms are assuming a leading role in various areas of decision-making. Prompted by a promise of increased economic efficiency and solutions to pressing societal challenges, algorithmic decision-making is often celebrated as an impartial and constructive substitute for human adjudication. But in the face of this implied objectivity and efficiency, the application of algorithms is also marred by mounting concerns about embedded biases, discrimination, and exclusion. In Europe, vigorous debates on the risks and adverse implications of algorithmic decision-making largely revolve around the potential of data protection laws to tackle some of the related issues. For example, one of the often-cited avenues for mitigating the impact of potentially unfair decision-making practices is the so-called 'right to explanation'. In essence, this right is derived from the provisions of the General Data Protection Regulation ('GDPR') ensuring data subjects' right of access and mandating the obligation of data controllers to provide relevant information about the existence of automated decision-making and meaningful information about the logic involved. Taking the corresponding rights and obligations in the context of the GDPR's specific provision on automated decision-making, the debates mainly focus on the efficacy and the exact scope of the 'right to explanation'. The underlying logic of the argued remedy lies in a transparency imperative: allowing data subjects to acquire as much knowledge as possible about the decision-making process means empowering individuals to take control of their data and take action. In other words, forewarned is forearmed. The related discussions and debates are ongoing, comprehensive, and often heated. However, they are also frequently misguided and isolated: embracing data protection law as the ultimate and sole lens is often not sufficient. Mandating the disclosure of the technical specifications of the employed algorithms in the name of transparency for, and empowerment of, data subjects potentially encroaches on the interests and rights of IPR holders, i.e., the business entities behind the algorithms. This study aims at pushing the boundaries of the transparency debate beyond the data protection regime. By systematically analysing legal requirements and current judicial practice, it assesses the limits posed on the transparency requirement and right of access by intellectual property law, namely by copyright and trade secrets. It is asserted that trade secrets, in particular, present an often insurmountable obstacle to realising the potential of the transparency requirement. In reaching that conclusion, the study explores the limits of the protection afforded by the European Trade Secrets Directive and contrasts them with the scope of the respective rights and obligations related to data access and portability enshrined in the GDPR. As shown, the far-reaching scope of protection under trade secrecy is evidenced both through the assessment of its subject matter and through the exceptions to such protection. As a way forward, the study scrutinises several possible legislative solutions, such as a flexible interpretation of the public interest exception in trade secrets law and the introduction of a strict liability regime for non-transparent decision-making.

Keywords: algorithms, public interest, trade secrets, transparency

Procedia PDF Downloads 125
4261 Hybrid Genetic Approach for Solving Economic Dispatch Problems with Valve-Point Effect

Authors: Mohamed I. Mahrous, Mohamed G. Ashmawy

Abstract:

A hybrid genetic algorithm (HGA) is proposed in this paper to determine the economic scheduling of electric power generation over a fixed time period under various system and operational constraints. The proposed technique can outperform conventional genetic algorithms (CGAs) in the sense that the HGA makes it possible both to improve the quality of the solution and to reduce the computing expense. In contrast, any carefully designed GA can only balance the exploration and the exploitation of the search effort, which means that an increase in the accuracy of a solution can only occur at the sacrifice of convergence speed, and vice versa; it is unlikely that both can be improved simultaneously. The proposed hybrid scheme is developed in such a way that a simple GA acts as a base-level search, which quickly directs the search towards the optimal region, and a local search method (a pattern search technique) is then employed for fine-tuning. The aim of the strategy is to achieve the cost reduction within a reasonable computing time. The effectiveness of the proposed hybrid technique is verified on two real public electricity supply systems with 13 and 40 generator units, respectively. The simulation results obtained with the HGA for the two real systems are very encouraging with regard to the computational expense and the cost reduction of power generation.
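
A minimal sketch of the hybrid scheme on a three-unit toy dispatch problem with the classic valve-point cost term; the coefficients and GA settings are illustrative assumptions, and scipy's Nelder-Mead stands in for the pattern search stage:

```python
import numpy as np
from scipy.optimize import minimize

# Minimal GA-plus-local-search sketch on a valve-point economic dispatch
# cost (3-unit toy system; all coefficients below are assumed).
a = np.array([500.0, 400.0, 200.0]); b = np.array([5.3, 5.5, 5.8])
c = np.array([0.004, 0.006, 0.009]); e = np.array([40.0, 30.0, 20.0])
f = np.array([0.06, 0.05, 0.04])
pmin = np.array([100.0, 100.0, 50.0]); pmax = np.array([450.0, 350.0, 225.0])
demand = 850.0

def cost(P):
    fuel = a + b * P + c * P**2 + np.abs(e * np.sin(f * (pmin - P)))  # valve-point term
    return fuel.sum() + 1e3 * abs(P.sum() - demand)                   # demand penalty

rng = np.random.default_rng(0)
pop = rng.uniform(pmin, pmax, (40, 3))                 # initial population
for _ in range(200):                                   # simple evolutionary loop
    fitness = np.array([cost(p) for p in pop])
    parents = pop[np.argsort(fitness)[:20]]            # truncation selection
    children = (parents[rng.integers(0, 20, 40)] +
                parents[rng.integers(0, 20, 40)]) / 2  # arithmetic crossover
    children += rng.normal(0, 5, children.shape)       # Gaussian mutation
    pop = np.clip(children, pmin, pmax)

best = pop[np.argmin([cost(p) for p in pop])]
refined = minimize(cost, best, method="Nelder-Mead")   # local fine-tuning stage
print("GA best cost:", round(cost(best), 2), "-> refined:", round(refined.fun, 2))
```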

Keywords: genetic algorithms, economic dispatch, pattern search

Procedia PDF Downloads 446
4260 Research of Stalled Operational Modes of Axial-Flow Compressor for Diagnostics of Pre-Surge State

Authors: F. Mohammadsadeghi

Abstract:

Relevance of the research: Axial compressors are used both in aircraft engines and in ground-based gas turbine engines. The compressor is considered one of the main gas turbine engine units, defining the absolute and relative performance indicators of the engine as a whole. Failure of the compressor often leads to drastic consequences; therefore, safe (stable) operation must be maintained when using an axial compressor. Currently, there is a tendency for the power, productivity, circumferential velocity and compression ratio of axial compressors in gas turbine engines of aircraft and ground-based application to increase, whereas the metal consumption of their structure tends to fall. This causes an increase in dynamic loads, as well as a danger of damage to highly loaded compressor or engine structure elements due to transient processes. In the operating practice of aeronautical engineering and ground units with gas turbine drives, loss of operational stability of gas turbine engines is one of the relatively frequent failure causes and can lead to emergency situations. Surge is considered an absolute loss of stability and is one of the most dangerous and most frequently occurring types of instability. However detailed the research on this phenomenon has been, the development of measures for preventing surge before it occurs is still relevant. This is why the study of transient processes in axial compressors is necessary in order to ensure efficient, stable and secure operation. The paper addresses the problem of improving the automatic control system by integrating anti-surge algorithms for the axial compressor of an aircraft gas turbine engine. The paper considers the dynamic exhaustion of the gas-dynamic stability of a compressor stage, presents results of numerical simulation of the airflow over the airfoil at design and stalling modes, and reports the experimental research used to form the criteria that identify the compressor state at pre-surge mode detection. The authors formulated basic approaches for developing surge prevention systems, i.e., algorithms that allow detecting the onset of surge and systems that implement the proposed algorithms.
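
For illustration, a pre-surge precursor monitor can be sketched as a rolling-RMS threshold on an unsteady pressure signal (the synthetic signal and threshold below are assumptions; the paper's criteria are derived from its own numerical and experimental data):

```python
import numpy as np

# Minimal sketch of a pre-surge precursor monitor: rotating stall ahead of
# surge typically raises low-frequency pressure fluctuations, so a rolling
# RMS of the unsteady pressure signal is compared to a quiet-period baseline.
rng = np.random.default_rng(2)
fs, seconds = 1000, 4                    # sample rate [Hz], record length
t = np.arange(fs * seconds) / fs

pressure = 0.02 * rng.standard_normal(t.size)       # baseline turbulence
growing = t > 2.5                                    # stall cell develops late
pressure[growing] += 0.2 * np.sin(2 * np.pi * 55 * t[growing]) * (t[growing] - 2.5)

window = 200                                         # 0.2 s rolling window
rms = np.sqrt(np.convolve(pressure**2, np.ones(window) / window, mode="same"))

threshold = 5.0 * rms[:fs].mean()                    # 5x the quiet-period level
alarm_idx = np.argmax(rms > threshold)               # first crossing, 0 if none
if rms[alarm_idx] > threshold:
    print(f"pre-surge alarm at t = {t[alarm_idx]:.2f} s")
```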

Keywords: axial compressor, rotating stall, surge, unstable operation of gas turbine engine

Procedia PDF Downloads 410
4259 Deep Routing Strategy: Deep Learning Based Intelligent Routing in Software Defined Internet of Things

Authors: Zabeehullah, Fahim Arif, Yawar Abbas

Abstract:

Software Defined Networking (SDN) is a next-generation networking model that simplifies traditional network complexity and improves the utilization of constrained resources. Currently, most SDN-based Internet of Things (IoT) environments use traditional routing strategies that work on the basis of a maximum or minimum metric value. However, IoT network heterogeneity, dynamic traffic flows and complexity demand intelligent and self-adaptive routing algorithms, because traditional routing algorithms lack self-adaptation, intelligence and efficient utilization of resources. To some extent, SDN, due to its flexibility and centralized control, has managed the complexity and heterogeneity of IoT, but Software Defined IoT (SDIoT) still lacks intelligence. To address this challenge, we propose a model called Deep Routing Strategy (DRS), which uses a deep learning algorithm to perform routing in SDIoT intelligently and efficiently. Our model uses real-time traffic for training and learning. The results demonstrate that the proposed model achieved high accuracy and a low packet loss rate during path selection and outperformed the benchmark routing algorithm (OSPF). Moreover, the proposed model provided encouraging results under highly dynamic traffic flows.
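
A minimal sketch of learning-based path selection on synthetic traffic features (the feature set, labels, and candidate paths are assumptions, not the DRS training pipeline):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Minimal sketch of learning-based path selection (synthetic data).
# Features per candidate path: [hop count, mean link utilization, mean delay].
rng = np.random.default_rng(4)
X = rng.uniform([1, 0.0, 1.0], [10, 1.0, 50.0], (500, 3))
# Target "path quality": fewer hops, lower utilization and delay score higher.
y = 1.0 / (0.3 * X[:, 0] + 5.0 * X[:, 1] + 0.05 * X[:, 2]) + rng.normal(0, 0.01, 500)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X, y)

# A controller would score each candidate path and install the best one.
candidates = np.array([[3, 0.8, 12.0],     # short but congested
                       [5, 0.2, 15.0]])    # longer but lightly loaded
scores = model.predict(candidates)
print("scores:", np.round(scores, 3), "-> chosen path:", int(np.argmax(scores)))
```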

Keywords: SDN, IoT, DL, ML, DRS

Procedia PDF Downloads 113
4258 Adverse Impacts of Poor Wastewater Management Practices on Water Quality in Gebeng Industrial Area, Pahang, Malaysia

Authors: I. M. Sujaul, M. A. Sobahan, A. A. Edriyana, F. M. Yahaya, R. M. Yunus

Abstract:

This study was carried out to investigate the adverse effects of industrial wastewater on surface water quality in the Gebeng industrial estate, Pahang, Malaysia. Surface water was collected from six sampling stations. Physico-chemical parameters were characterized based on in-situ and ex-situ analyses according to the standard methods of the American Public Health Association (APHA). Selected heavy metals were determined using Inductively Coupled Plasma Mass Spectrometry (ICP-MS). The results revealed that the concentrations of heavy metals such as Pb, Cu, Cd, Cr and Hg were high in the samples, and that the values of Pb and Hg were higher in the wet season than in the dry season. According to the Malaysia National Water Quality Standard (NWQS) and the Water Quality Index (WQI), all the sampling stations were categorized as class IV (highly polluted). The present study revealed that careless disposal of wastes and direct discharge of effluents adversely affected surface water quality. Therefore, the authorities should enforce the laws to ensure proper wastewater management practices for environmental sustainability around the study area.
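
For illustration, a weighted water quality index aggregation is sketched below; the sub-index scores are invented, and while the weights and class bands echo the Malaysia DOE WQI convention, they should be treated as assumptions here:

```python
# Minimal sketch of a weighted water quality index (generic form; the
# official Malaysia DOE WQI uses detailed sub-index functions, so all
# numbers below are illustrative assumptions).

# Sub-index scores (0-100, higher is better) for measured parameters.
subindex = {"DO": 35.0, "BOD": 40.0, "COD": 30.0, "SS": 25.0, "pH": 80.0, "NH3N": 45.0}
weight   = {"DO": 0.22, "BOD": 0.19, "COD": 0.16, "SS": 0.16, "pH": 0.12, "NH3N": 0.15}

wqi = sum(subindex[p] * weight[p] for p in subindex)

# Illustrative class bands (assumed; check the official NWQS for real work).
bands = [(92.7, "I"), (76.5, "II"), (51.9, "III"), (31.0, "IV")]
cls = next((name for cut, name in bands if wqi >= cut), "V")
print(f"WQI = {wqi:.1f} -> class {cls}")   # low sub-indices land in class IV
```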

Keywords: water, heavy metals, water quality index, Gebeng

Procedia PDF Downloads 379
4257 Machine Learning Techniques in Seismic Risk Assessment of Structures

Authors: Farid Khosravikia, Patricia Clayton

Abstract:

The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of the seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IMs), given source characteristics, source-to-site distance, and local site conditions, for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudo-spectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models with pre-defined equations and coefficients are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as the statistical method in ground motion prediction, such as Artificial Neural Networks, Random Forest, and Support Vector Machines. The results indicate that these algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method, with Random Forest outperforming the other algorithms; the conventional method remains the better tool when only limited data are available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between the structural demand responses (e.g., component deformations, accelerations, internal forces, etc.) and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding pre-defined damage limit states, and therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms such as artificial neural networks, random forests, and support vector machines are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate predictions in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intensive numerical response-history analyses.
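
A minimal sketch of the first comparison on synthetic ground-motion records (the functional form and noise level are assumptions, not the study's catalog):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Minimal sketch comparing a linear model with Random Forest for ground-motion
# prediction (synthetic records; a real model would use magnitude, distance,
# and site terms from an actual strong-motion catalog).
rng = np.random.default_rng(5)
n = 2000
mag = rng.uniform(4.0, 7.5, n)
log_dist = np.log(rng.uniform(5.0, 200.0, n))
vs30 = rng.uniform(180.0, 760.0, n)
# Synthetic ln(PGA) with a saturation-like nonlinearity plus noise.
ln_pga = (1.1 * mag - 1.6 * log_dist - 0.4 * np.log(vs30 / 400.0)
          - 0.08 * (mag - 6.0) ** 2 + rng.normal(0, 0.5, n))

X = np.column_stack([mag, log_dist, vs30])
X_tr, X_te, y_tr, y_te = train_test_split(X, ln_pga, random_state=0)

for name, model in [("linear", LinearRegression()),
                    ("random forest", RandomForestRegressor(random_state=0))]:
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name:13s} RMSE of ln(PGA): {rmse:.3f}")
```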

Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine

Procedia PDF Downloads 106
4256 The Tourist Satisfaction on Brand Identity Design of Creative Agriculture Community Enterprise, Bang Khonthi District, Samut Songkhram Province

Authors: Panupong Chanplin, Kathaleeya Chanda, Wilailuk Mepracha

Abstract:

The aims of this research were twofold: 1) to design the brand identity of the Creative Agriculture Community Enterprise, Bang Khonthi District, Samut Songkhram Province and 2) to study the level of tourist satisfaction towards the brand identity design of the Creative Agriculture Community Enterprise, Bang Khonthi District, Samut Songkhram Province. Tourist satisfaction was measured using six criteria: clear brand positioning, likeable brand personality, memorable logo, attractive color palette, professional typography and on-brand supporting graphics. The researcher utilized a probability sampling method via simple random sampling. The sample consisted of 30 tourists in the Creative Agriculture Community Enterprise. The statistics utilized for data analysis were percentage, mean, and standard deviation. The results suggest that tourists had high levels of satisfaction towards all six criteria of the brand identity design that was created to target them. This study proposes that the brand identity specifically designed for the Creative Agriculture Community Enterprise could also be implemented with other real media already available on the market.

Keywords: satisfaction, brand identity, logo, creative agriculture community enterprise

Procedia PDF Downloads 243
4255 Remote Sensing Approach to Predict the Impacts of Land Use/Land Cover Change on Urban Thermal Comfort Using Machine Learning Algorithms

Authors: Ahmad E. Aldousaria, Abdulla Al Kafy

Abstract:

Urbanization is an incessant process that involves the transformation of land use/land cover (LULC), resulting in a reduction of cool land covers and thermal comfort zones (TCZs). This study explores the directional shrinkage of TCZs in Kuwait using Landsat satellite data from 1991-2021 and predicts the future LULC and TCZ distribution for 2026 and 2031 using cellular automata (CA) and artificial neural network (ANN) algorithms. The analysis revealed rapid urban expansion (40%) in the SE, NE, and NW directions and TCZ shrinkage in the N-NW and SW directions, with 25% of the area classed as very uncomfortable. The prediction showed an increase in urban area from 44% in 2021 to 47% and 52% in 2026 and 2031, respectively, with uncomfortable zones concentrated around urban areas and bare lands in the N-NE and N-NW directions. This study proposes an effective and sustainable framework to control TCZ shrinkage, including zero-soil policies, planned landscape design, man-made water bodies, and rooftop gardens. This study will help urban planners and policymakers make Kuwait an eco-friendly, functional, and sustainable country.
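
A minimal cellular-automata sketch of the urban-growth step (the logistic neighborhood rule below stands in for the ANN-learned transition suitability used in the study; grid and parameters are assumed):

```python
import numpy as np

# Minimal cellular-automata sketch of urban growth (illustrative only; the
# study couples the CA with an ANN trained on historical LULC transitions).
rng = np.random.default_rng(6)
size = 50
urban = rng.random((size, size)) < 0.15          # initial urban mask

def urban_neighbors(mask):
    """Count urban cells in the 8-neighborhood of every cell (toroidal)."""
    counts = np.zeros(mask.shape, dtype=int)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == dx == 0:
                continue
            counts += np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return counts

for step in range(5):                            # five simulated periods
    n = urban_neighbors(urban)
    # Transition probability rises with urban neighbors (a stand-in for the
    # ANN's learned suitability score).
    p_convert = 1.0 / (1.0 + np.exp(-(n - 5)))
    urban = urban | ((rng.random(urban.shape) < p_convert) & ~urban)
    print(f"step {step + 1}: urban share = {urban.mean():.1%}")
```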

Keywords: land cover change, thermal environment, green cover loss, machine learning, remote sensing

Procedia PDF Downloads 227
4254 Practices of Entomophagy and Entomotherapy in Baranggay Alambijud, Argao and Baranggay Lusaran, Cebu City, Philippines

Authors: Jake Joshua C. Garces, Zandra O. Jarito, Leslie Ann T. Barriga, Froilen C. Domicelo, Nimfa R. Pansit

Abstract:

The study was conducted to discover the medicinal and edible potential of different insect species in Baranggay Alambijud, Argao and Baranggay Lusaran, Cebu City, Cebu. To identify these entomological practices, the researchers carried out a survey in these key sites. Fourteen key informants were identified with the aid of two sampling methods: the snowball technique and purposive sampling. Open-ended questionnaires were employed to obtain authentic and significant information from the key informants. The results showed that, in the practice of entomotherapy, two insects were used as medicine, namely the migratory locust (Locusta migratoria manilensis) and the honey bee (Apis dorsata), and two insect by-products were utilized, namely the feces of the cockroach (Periplaneta americana) and honey. The white grub (Cotinis nitida) and bee eggs were also documented as edible and were thus utilized in entomophagic practices. After applying thematic analysis, it was determined that the causative factors of these entomological practices include the informants' limited educational attainment, their inability to access urban societies, and the influence of their family and community.

Keywords: entomophagy, entomotherapy, entomology, key informants

Procedia PDF Downloads 336
4253 Effectiveness of Adrenal Venous Sampling in the Management of Primary Aldosteronism: Single Centered Cohort Study at a Tertiary Care Hospital in Sri Lanka

Authors: Balasooriya B. M. C. M., Sujeeva N., Thowfeek Z., Siddiqa Omo, Liyanagunawardana J. E., Jayawardana Saiu, Manathunga S. S., Katulanda G. W.

Abstract:

Introduction and objectives: Adrenal venous sampling (AVS) is the gold standard for discriminating unilateral primary aldosteronism (UPA) from bilateral disease (BPA). AVS is technically demanding and is performed in only a limited number of centers worldwide. To the best of our knowledge, except for one study conducted in India, no other research on this topic has been conducted in South Asia. This study aimed to evaluate the effectiveness of AVS in the management of primary aldosteronism. Methods: A total of 32 patients who underwent AVS at the National Hospital of Sri Lanka from April 2021 to April 2023 were enrolled. Demographic, clinical and laboratory data were obtained retrospectively. A procedure was considered successful when adequate cannulation of both adrenal veins was demonstrated. The cortisol gradient between the adrenal vein (AV) and the peripheral vein was used to establish the success of venous cannulation, and lateralization was determined by the aldosterone gradient between the two sides. Continuous and categorical variables were summarized with means, SDs, and proportions, respectively. The mean and standard deviation of the contralateral suppression index (CSI) were estimated with an intercept-only Bayesian inference model. Results: Of the 32 patients, the average age was 52.47 ± 26.14 years, and 19 (59.4%) were males. Both AVs were successfully cannulated in 12 (37.5%). Among them, lateralization was demonstrated in 11 (91.7%), and one was diagnosed with bilateral disease. There were no total failures. Right AV cannulation was unsuccessful in 18 (56.25%), of whom lateralization was demonstrated in 9 (50%) and the others were inconclusive. Left AV cannulation was unsuccessful in only 2 (6.25%); one lateralized, and the other remained inconclusive. The estimated mean of the CSI was 0.33 (89% credible interval 0.11-0.86). Seven patients underwent unilateral adrenalectomy and demonstrated significant improvement in blood pressure during follow-up. Two patients await surgery. The others were treated medically. Conclusions: Despite failures due to procedural difficulties, AVS remained useful in the management of patients with PA. Moreover, the success of the procedure requires experienced hands and advanced equipment to achieve optimal outcomes in PA.
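
For illustration, the standard AVS indices can be computed as below; the hormone values and cut-offs are assumptions for demonstration, not the study's data:

```python
# Minimal sketch of the standard AVS indices (cut-off values vary between
# protocols; all numbers below are illustrative assumptions).

# Measured hormone levels (aldosterone and cortisol per site; assumed).
left  = {"aldo": 2500.0, "cort": 60.0}
right = {"aldo": 50.0,   "cort": 55.0}
peripheral = {"aldo": 25.0, "cort": 15.0}

# Selectivity index: AV cortisol / peripheral cortisol confirms cannulation.
si_left  = left["cort"] / peripheral["cort"]
si_right = right["cort"] / peripheral["cort"]

# Cortisol-corrected aldosterone ratios per side.
acr_left, acr_right = left["aldo"] / left["cort"], right["aldo"] / right["cort"]
dominant, nondominant = max(acr_left, acr_right), min(acr_left, acr_right)

li  = dominant / nondominant                                   # lateralization index
csi = nondominant / (peripheral["aldo"] / peripheral["cort"])  # contralateral suppression

print(f"SI L/R = {si_left:.1f}/{si_right:.1f}, LI = {li:.1f}, CSI = {csi:.2f}")
# e.g. a high LI with CSI < 1 would suggest unilateral disease (assumed cut-offs).
```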

Keywords: adrenal venous sampling, lateralization, contralateral suppression index, primary aldosteronism

Procedia PDF Downloads 66
4252 Conjunctive Management of Surface and Groundwater Resources under Uncertainty: A Retrospective Optimization Approach

Authors: Julius M. Ndambuki, Gislar E. Kifanyi, Samuel N. Odai, Charles Gyamfi

Abstract:

Conjunctive management of surface and groundwater resources is a challenging task due to the spatial and temporal variability of the hydrology, as well as the hydrogeology, of the water storage systems. Surface water-groundwater hydrogeology is highly uncertain; it is thus imperative that this uncertainty be explicitly accounted for when managing water resources. Various methodologies have been developed and applied by researchers in an attempt to account for this uncertainty. For example, simulation-optimization models are often used for conjunctive water resources management. However, direct application of such an approach, in which all realizations are considered at each iteration of the optimization process, leads to a very expensive optimization in terms of computational time, particularly when the number of realizations is large. The aim of this paper, therefore, is to introduce and apply an efficient approach, referred to as Retrospective Optimization Approximation (ROA), that can be used for optimizing the conjunctive use of surface water and groundwater over multiple hydrogeological model simulations. This work is based on a stochastic simulation-optimization framework using the recently emerged technique of sample average approximation (SAA), a sampling-based method implemented within the ROA approach. The ROA approach solves and evaluates a sequence of generated optimization sub-problems with an increasing number of realizations (sample size). A response matrix technique was used to link the simulation model with the optimization procedure, and the k-means clustering sampling technique was used to map the realizations. The methodology is demonstrated through application to a hypothetical example, in which the generated optimization sub-problems were solved and analysed using the "Active-Set" core optimizer implemented in the MATLAB 2014a environment. Through the k-means clustering sampling technique, the ROA-Active Set procedure was able to arrive at a (nearly) converged maximum expected total optimal conjunctive water use withdrawal rate within relatively few iterations (6 to 7). The results indicate that the ROA approach is a promising technique for optimizing the conjunctive use of surface water and groundwater withdrawal rates under hydrogeological uncertainty.
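
A minimal sketch of the ROA idea on a toy problem: each sub-problem maximizes a sample-average objective over a growing set of sampled realizations, warm-started from the previous solution (the objective is an illustrative stand-in, not the groundwater model):

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch of retrospective optimization with sample average
# approximation (a toy concave objective stands in for the simulation model;
# the uncertain parameter plays the role of a hydrogeological realization).
rng = np.random.default_rng(8)

def profit(x, k):
    """Benefit of withdrawal rate x under realization k (toy model)."""
    return 10.0 * np.sqrt(x) - k * x          # k ~ uncertain marginal cost

x_opt, sample_sizes = 1.0, [10, 50, 250]      # increasing sample sizes (ROA)
for n in sample_sizes:
    ks = rng.lognormal(mean=0.0, sigma=0.4, size=n)   # sampled realizations
    # SAA sub-problem: maximize the sample-average profit (warm-started).
    res = minimize(lambda x: -np.mean([profit(x[0], k) for k in ks]),
                   x0=[x_opt], bounds=[(1e-6, None)])
    x_opt = res.x[0]
    print(f"n = {n:4d}: optimal withdrawal rate = {x_opt:.3f}")
```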

Keywords: conjunctive water management, retrospective optimization approximation approach, sample average approximation, uncertainty

Procedia PDF Downloads 232
4251 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques

Authors: Stefan K. Behfar

Abstract:

The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application.
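
As an illustration of the probabilistic sampling component, the sketch below takes a uniform node sample of a synthetic scale-free graph; the induced-subgraph statistic comes out biased low, which is precisely why the choice of sampling design matters (a networkx stand-in, not the paper's Ethereum data):

```python
import random
import networkx as nx

# Minimal sketch of probabilistic sampling on a transaction graph (synthetic
# scale-free graph; a real pipeline would build the graph from Ethereum
# blocks, far larger than any single machine could analyse exhaustively).
random.seed(0)
g = nx.barabasi_albert_graph(n=10_000, m=3, seed=0)

# Keep a 5% uniform random sample of nodes and the edges among them.
sample_nodes = random.sample(list(g.nodes), k=len(g) // 20)
subgraph = g.subgraph(sample_nodes)

# Compare a cheap statistic on the sample against the full graph. The
# induced-subgraph mean degree is biased low under uniform node sampling,
# so a practical estimator must correct for the sampling design.
full_deg = sum(d for _, d in g.degree()) / len(g)
samp_deg = sum(d for _, d in subgraph.degree()) / max(len(subgraph), 1)
print(f"mean degree full = {full_deg:.2f}, sample = {samp_deg:.2f}")
```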

Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing

Procedia PDF Downloads 78
4250 The Asymmetric Proximal Support Vector Machine Based on Multitask Learning for Classification

Authors: Qing Wu, Fei-Yan Li, Heng-Chang Zhang

Abstract:

Multitask learning support vector machines (SVMs) have recently attracted increasing research attention. Given several related tasks, single-task learning methods train each task separately and ignore the inner cross-relationships among tasks. Multitask learning, by contrast, can capture the correlation information among tasks and achieve better performance by training all tasks simultaneously. In addition, the asymmetric squared loss function can improve the generalization ability of models on asymmetrically distributed data. In this paper, we first make two assumptions on the relatedness among tasks and propose two multitask learning proximal support vector machine algorithms, named MTL-a-PSVM and EMTL-a-PSVM, respectively. MTL-a-PSVM seeks a trade-off between the maximum expectile distance for each task model and the closeness of each task model to the general model. As an extension of MTL-a-PSVM, EMTL-a-PSVM can select appropriate kernel functions for shared information and private information. In addition, two corresponding special cases, named MTL-PSVM and EMTL-PSVM, are proposed by analyzing the asymmetric squared loss function; they can be easily implemented by solving linear systems. Experimental analysis on three classification datasets demonstrates the effectiveness and superiority of the proposed multitask learning algorithms.
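
For context, the single-task proximal SVM that these methods build on reduces training to one linear system; a minimal sketch in the Fung-Mangasarian style (synthetic data; this is not the authors' MTL-a-PSVM):

```python
import numpy as np

# Minimal single-task proximal SVM sketch: the classifier is obtained by
# solving one regularized linear system rather than a quadratic program.
# This is the building block the paper's multitask variants extend.
rng = np.random.default_rng(9)
n, d, nu = 200, 2, 10.0                      # samples, features, trade-off

# Two Gaussian blobs with labels +1 / -1.
X = np.vstack([rng.normal(+1.5, 1.0, (n // 2, d)),
               rng.normal(-1.5, 1.0, (n // 2, d))])
y = np.concatenate([np.ones(n // 2), -np.ones(n // 2)])

A = np.hstack([X, -np.ones((n, 1))])         # augment with a bias column
D = np.diag(y)
e = np.ones(n)
# Proximal SVM solution: (I/nu + A^T A) z = A^T D e, with z = [w; gamma].
z = np.linalg.solve(np.eye(d + 1) / nu + A.T @ A, A.T @ (D @ e))
w, gamma = z[:d], z[d]

pred = np.sign(X @ w - gamma)
print(f"training accuracy: {(pred == y).mean():.2%}")
```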

Keywords: multitask learning, asymmetric squared loss, EMTL-a-PSVM, classification

Procedia PDF Downloads 137
4249 Truancy and Academic Performance of Colleges of Education Students in South Western Nigeria: Implication for Evaluation

Authors: Oloyede Akinniyi Ojo

Abstract:

This study investigated the relationship between truancy and the academic performance of Colleges of Education students in southwestern Nigeria. It also examined the relationship between the college physical environment and truancy behaviour among students, as well as the difference between male and female students' involvement in truancy. Purposive sampling was used to select four colleges of education in southwestern Nigeria, and 120 year-3 students per college were selected, while stratified sampling was used to select schools and courses; a total of 480 students participated in the study. Three research instruments were used, namely: Lecturers' Attendance Records, Students' Statements of Results and the 'College Environment Questionnaire' (CEQ). Four research questions guided the study. Data were analyzed using descriptive statistics, chi-square and t-tests. The CEQ was validated by a team of experts in the field of educational evaluation, and test reliability was established at r = 0.74. The study concluded that truancy exists in colleges of education and that there is a significant relationship between truancy and the academic performance of male and female truants. The study also revealed that the physical environment has a considerable effect on students' truancy behaviour; hence, the study recommended that efforts be made to provide an attractive college environment for effective learning.

Keywords: academic performance, colleges of education, students, truancy

Procedia PDF Downloads 192
4248 Design and Implementation of a Hardened Cryptographic Coprocessor with 128-bit RISC-V Core

Authors: Yashas Bedre Raghavendra, Pim Vullers

Abstract:

This study presents the design and implementation of an abstract cryptographic coprocessor, leveraging the AMBA (Advanced Microcontroller Bus Architecture) protocols APB (Advanced Peripheral Bus) and AHB (Advanced High-performance Bus) to enable seamless integration with the main CPU (central processing unit) and enhance the coprocessor's algorithm flexibility. The primary objective is to create a versatile coprocessor that can execute various cryptographic algorithms, including ECC (elliptic-curve cryptography), RSA (Rivest-Shamir-Adleman), and AES (Advanced Encryption Standard), while providing a robust and secure solution for modern secure embedded systems. To achieve this goal, the coprocessor is equipped with a tightly coupled memory (TCM) for rapid data access during cryptographic operations. The TCM is placed within the coprocessor, ensuring quick retrieval of critical data and optimizing overall performance. Additionally, the program memory is positioned outside the coprocessor, allowing for easy updates and reconfiguration, which enhances adaptability to future algorithm implementations. Direct links are employed instead of DMA (direct memory access) for data transfer, ensuring faster communication and reducing complexity. The AMBA-based communication architecture facilitates seamless interaction between the coprocessor and the main CPU, streamlining data flow and ensuring efficient utilization of system resources. The abstract nature of the coprocessor allows for easy integration of new cryptographic algorithms in the future; as the security landscape continues to evolve, the coprocessor can adapt and incorporate emerging algorithms, making it a future-proof solution for cryptographic processing. Furthermore, this study explores the addition of custom instructions to the RISC-V ISE (Instruction Set Extension) to enhance cryptographic operations. By incorporating custom instructions specifically tailored for cryptographic algorithms, the coprocessor achieves higher efficiency and fewer cycles per instruction (CPI) compared to traditional instruction sets. The adoption of the RISC-V 128-bit architecture significantly reduces the total number of instructions required for complex cryptographic tasks, leading to faster execution times and improved overall performance. Comparisons are made with 32-bit and 64-bit architectures, highlighting the advantages of the 128-bit architecture in terms of reduced instruction count and CPI. In conclusion, the abstract cryptographic coprocessor presented in this study offers significant advantages in terms of algorithm flexibility, security, and integration with the main CPU. By leveraging AMBA protocols and employing direct links for data transfer, the coprocessor achieves high-performance cryptographic operations without compromising system efficiency. With its TCM and external program memory, the coprocessor is capable of securely executing a wide range of cryptographic algorithms. This versatility and adaptability, coupled with the benefits of custom instructions and the 128-bit architecture, make it an invaluable asset for secure embedded systems, meeting the demands of modern cryptographic applications.
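
As a rough illustration of the instruction-count argument, a back-of-envelope model of schoolbook big-integer multiplication shows how word width drives the number of single-word multiplies (a simplified model with assumed operand sizes, not the paper's measurements):

```python
# Back-of-envelope sketch of why a wider datapath cuts instruction count for
# big-integer cryptography (a simple schoolbook-multiplication model; real
# implementations use Karatsuba/Montgomery techniques and differ in constants).

def word_mults(operand_bits, word_bits):
    """Single-word multiplies needed for one schoolbook big-int multiply."""
    words = -(-operand_bits // word_bits)   # ceiling division
    return words * words

for word in (32, 64, 128):
    ops_rsa = word_mults(2048, word)        # e.g. an RSA-2048 modular multiply
    ops_ecc = word_mults(256, word)         # e.g. a 256-bit ECC field multiply
    print(f"{word:3d}-bit words: RSA-2048 ~{ops_rsa:5d} mults, ECC-256 ~{ops_ecc:3d} mults")
```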

Keywords: abstract cryptographic coprocessor, AMBA protocols, ECC, RSA, AES, tightly coupled memory, secure embedded systems, RISC-V ISE, custom instructions, instruction count, cycles per instruction

Procedia PDF Downloads 70
4247 Resilient Machine Learning in the Nuclear Industry: Crack Detection as a Case Study

Authors: Anita Khadka, Gregory Epiphaniou, Carsten Maple

Abstract:

There is a dramatic surge in the adoption of machine learning (ML) techniques in many areas, including the nuclear industry (such as fault diagnosis and fuel management in nuclear power plants), autonomous systems (including self-driving vehicles), space systems (space debris recovery, for example), medical surgery, network intrusion detection, malware detection, to name a few. With the application of learning methods in such diverse domains, artificial intelligence (AI) has become a part of everyday modern human life. To date, the predominant focus has been on developing underpinning ML algorithms that can improve accuracy, while factors such as resiliency and robustness of algorithms have been largely overlooked. If an adversarial attack is able to compromise the learning method or data, the consequences can be fatal, especially but not exclusively in safety-critical applications. In this paper, we present an in-depth analysis of five adversarial attacks and three defence methods on a crack detection ML model. Our analysis shows that it can be dangerous to adopt machine learning techniques in security-critical areas such as the nuclear industry without rigorous testing since they may be vulnerable to adversarial attacks. While common defence methods can effectively defend against different attacks, none of the three considered can provide protection against all five adversarial attacks analysed.
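
As an illustration of one attack family commonly considered in such analyses, a fast gradient sign method (FGSM) step against a toy logistic-regression "crack / no-crack" classifier (synthetic weights and input, not the paper's model):

```python
import numpy as np

# Minimal FGSM-style adversarial example against a logistic-regression
# stand-in for a crack detector (synthetic weights and input).
rng = np.random.default_rng(10)
d = 64                                         # e.g. a tiny flattened patch

w = rng.normal(0, 1, d)                        # pretend-trained weights
b = 0.0
x = 0.1 * w + rng.normal(0, 0.1, d)            # input the model classifies correctly

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))  # P(crack)

y = 1.0                                        # true label: crack present
# For logistic loss, the gradient of the loss w.r.t. the input is (p - y) * w.
grad_x = (predict_proba(x) - y) * w

eps = 0.2
x_adv = x + eps * np.sign(grad_x)              # FGSM step raises the loss

print(f"P(crack) clean: {predict_proba(x):.3f}, adversarial: {predict_proba(x_adv):.3f}")
```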

Keywords: adversarial machine learning, attacks, defences, nuclear industry, crack detection

Procedia PDF Downloads 159
4246 Quantitative Evaluation of Endogenous Reference Genes for ddPCR under Salt Stress Using a Moderate Halophile

Authors: Qinghua Xing, Noha M. Mesbah, Haisheng Wang, Jun Li, Baisuo Zhao

Abstract:

Droplet digital PCR (ddPCR) is being increasingly adopted for gene detection and quantification because of its higher sensitivity and specificity. According to previous observations and our lab data, it is essential to use endogenous reference genes (RGs) when investigating gene expression at the mRNA level under salt stress. This study aimed to select and validate suitable RGs for gene expression analysis under salt stress using ddPCR. Six candidate RGs were selected based on the tandem mass tag (TMT)-labeled quantitative proteomics of Alkalicoccus halolimnae at four salinities. The expression stability of these candidate genes was evaluated using statistical algorithms (geNorm, NormFinder, BestKeeper and RefFinder). There was only a small fluctuation in the cycle threshold (Ct) value and copy number of the pdp gene; its expression stability ranked at the top in all algorithms, making it the most suitable RG for quantifying expression in A. halolimnae under salt stress by both qPCR and ddPCR. The single RG pdp and RG combinations were used to normalize the expression of ectA, ectB, ectC, and ectD under four salinities. The present study constitutes the first systematic analysis of endogenous RG selection for halophiles responding to salt stress. This work provides a valuable theoretical basis and methodological reference for internal control identification in ddPCR-based stress response models.
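
A minimal sketch of the geNorm stability measure M, the criterion behind one of the cited algorithms (synthetic expression matrix; lower M indicates a more stable reference gene):

```python
import numpy as np

# Minimal sketch of the geNorm stability measure M (synthetic expression
# matrix; real input would be relative quantities per sample for each
# candidate reference gene).
rng = np.random.default_rng(11)
genes = ["pdp", "gene_b", "gene_c", "gene_d"]
base = rng.lognormal(5.0, 1.0, 12)                 # 12 samples' common scale
expr = np.column_stack([
    base * rng.lognormal(0.0, 0.05, 12),           # stable candidate
    base * rng.lognormal(0.0, 0.30, 12),
    base * rng.lognormal(0.0, 0.40, 12),
    base * rng.lognormal(0.0, 0.35, 12),
])

def genorm_m(expr):
    logs = np.log2(expr)
    m = []
    for j in range(expr.shape[1]):
        # SD of the pairwise log-ratio with every other gene, averaged.
        sds = [np.std(logs[:, j] - logs[:, k], ddof=1)
               for k in range(expr.shape[1]) if k != j]
        m.append(np.mean(sds))
    return np.array(m)

for gene, m in sorted(zip(genes, genorm_m(expr)), key=lambda t: t[1]):
    print(f"{gene:7s} M = {m:.3f}")
```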

Keywords: endogenous reference gene, salt stress, ddPCR, RT-qPCR, Alkalicoccus halolimnae

Procedia PDF Downloads 106
4245 The Effect of the Contributory Pension Scheme on Employees’ Performance

Authors: Oladipo Jimoh Ayanda, Fashagba Mathew Olasehinde

Abstract:

Pension is a post-retirement benefit paid to employees after retirement to cushion the effect of severance from monthly emoluments. It serves the dual purpose of providing financial succour to retired employees and motivating employees currently in service to greater performance on duty. However, the scheme, as previously operated in Nigeria, was prone to pitfalls such as delayed and irregular payments, inadequate budgetary provisions, and employee suffering and deaths arising from the rigors of verification exercises, among others. This necessitated the replacement of the old scheme with the contributory pension scheme through an enabling law in 2004. The implementation of the new scheme has its own challenges, especially in connection with administration. These challenges pose the fundamental problem of establishing a nexus between pension benefits and work performance, which is the focus of this study. The study objective was to determine the effect of the contributory pension scheme on employees' performance. The study population consisted of National Universities Commission-recognized public and private universities in South West Nigeria. A multi-stage sampling method involving stratified sampling and systematic sampling was used to select 359 respondents, and data were collected through questionnaire administration. The procedures for analyzing the data included descriptive statistics, a normal distribution test and cross-tabulation (gamma coefficient). The findings showed that the existence of the scheme positively enhances employees' performance, as indicated by a normal distribution test Z-score (10.169) greater than the table value (1.96) at the 0.05 level. The study concluded that the scope for enhancing employees' current job performance can be quite elastic if future retirement benefits are guaranteed through proper and efficient administration and management of the contributory pension scheme. The study recommended that certain factors, such as employers' commitment, which account for different levels of confidence between public and private universities, be looked into in order to improve confidence across the board, and that the provisions of the scheme as they affect the PFAs be properly monitored to ensure compliance.

Keywords: pension, retirement, performance, employees, benefit

Procedia PDF Downloads 332
4244 Transformational Justice for Employees' Job Satisfaction

Authors: Hassan Barau Singhry

Abstract:

Purpose: Leadership, or the absence of it, is an important behaviour affecting employees' job satisfaction. Although there are many models of leadership, one that stands out in a period of change is transformational leadership. The aim of this study is to investigate the mediating role of organizational justice in the relationship between transformational leadership and employee job satisfaction. The study is based on the assumption that change begins with leaders, and that leaders should be fair and just. Methodology: A cross-sectional survey using a structured questionnaire was employed to collect the data. The population was drawn from the three tiers of government in Nigeria: local, state, and federal. Stratified random sampling was used, and 418 middle managers of public organizations responded to the questionnaire. Multiple regression, aided by structural equation modeling, was employed to test four hypothesized relationships. Findings: The regression results support the mediating role of organizational justice (distributive, procedural, interpersonal, and informational justice) in the link between transformational leadership and job satisfaction. Originality/value: This study adds to the human resource management literature by empirically validating and integrating transformational leadership behaviour with the four dimensions of organizational justice theory. The study is expected to be beneficial to top and middle-level administrators as well as to theory building and testing.
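
For readers unfamiliar with mediation testing, the following Python sketch shows the product-of-coefficients decomposition behind a simple mediation model (path a: leadership to justice; path b: justice to satisfaction; c': the direct path). It is a simplified stand-in for the authors' regression-plus-SEM procedure, and the data are simulated, not the study's.

```python
import numpy as np

def ols(X, y):
    """OLS coefficients with an intercept, via least squares."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta  # [intercept, slopes...]

# Simulated data: transformational leadership (X), organizational
# justice (M) as mediator, job satisfaction (Y); n = 418 as in the study.
rng = np.random.default_rng(1)
n = 418
X = rng.normal(size=n)
M = 0.6 * X + rng.normal(scale=0.8, size=n)             # path a
Y = 0.5 * M + 0.2 * X + rng.normal(scale=0.8, size=n)   # paths b and c'

a = ols(X.reshape(-1, 1), M)[1]
b, c_prime = ols(np.column_stack([M, X]), Y)[1:]
print(f"indirect effect a*b = {a*b:.3f}, direct effect c' = {c_prime:.3f}")
```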

Keywords: distributive justice, job satisfaction, organizational justice, procedural justice, transformational leadership

Procedia PDF Downloads 175
4243 A Study of Using Multiple Subproblems in Dantzig-Wolfe Decomposition of Linear Programming

Authors: William Chung

Abstract:

This paper studies the use of multiple subproblems in Dantzig-Wolfe decomposition of linear programming (DW-LP). Traditionally, the decomposed LP consists of one LP master problem and one LP subproblem. The master problem and the subproblem are solved alternately, exchanging the dual prices of the master problem and the proposals of the subproblem, until the LP is solved. It is well known that convergence is slow, with a long tail of near-optimal solutions (asymptotic convergence). Hence, the performance of DW-LP depends strongly on the number of decomposition steps, and if that number can be greatly reduced, the performance of DW-LP can be improved significantly. One way to reduce the number of decomposition steps is to increase the number of proposals passed from the subproblem to the master problem. To do so, we propose adding a quadratic approximation function to the LP subproblem in order to develop a set of approximate-LP subproblems (multiple subproblems). Consequently, in each decomposition step, multiple subproblems are solved, providing multiple proposals to the master problem, and the number of decomposition steps can be reduced greatly. Note that each approximate-LP subproblem is a nonlinear program, and solving the original LP subproblem is faster than solving the nonlinear multiple subproblems. Hence, using multiple subproblems in DW-LP involves a tradeoff between the number of approximate-LP subproblems formed and the number of decomposition steps. In this paper, we derive the corresponding algorithms and provide some simple computational results. Some properties of the resulting algorithms are also given.
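
To make the baseline concrete, here is a minimal sketch of the traditional single-subproblem DW-LP loop (the procedure whose step count the paper aims to reduce), written in Python with scipy.optimize.linprog. The toy problem data, tolerance, and iteration cap are illustrative choices, not anything from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Toy block-angular LP: min c@x  s.t.  A@x <= b (coupling constraint),
# x in X = {x : 0 <= x <= 3}^2 (subproblem polytope).
# The optimum is x = (3, 1) with objective -11.
c = np.array([-3.0, -2.0])
A = np.array([[1.0, 1.0]])   # coupling: x1 + x2 <= 4
b = np.array([4.0])
bounds = [(0, 3), (0, 3)]

proposals = [np.zeros(2)]    # start from one extreme point of X
for step in range(20):
    # Restricted master: best convex combination of known proposals.
    master = linprog([c @ p for p in proposals],
                     A_ub=np.column_stack([A @ p for p in proposals]),
                     b_ub=b,
                     A_eq=np.ones((1, len(proposals))), b_eq=[1.0],
                     method="highs")
    pi = master.ineqlin.marginals        # dual prices of coupling rows
    sigma = master.eqlin.marginals[0]    # dual of the convexity row
    # Pricing subproblem: minimize the reduced cost over X.
    sub = linprog(c - A.T @ pi, bounds=bounds, method="highs")
    if sub.fun - sigma >= -1e-9:         # no improving proposal left
        break
    proposals.append(sub.x)              # new proposal for the master

x_opt = sum(l * p for l, p in zip(master.x, proposals))
print(f"decomposition steps = {step}, x = {x_opt}, obj = {c @ x_opt:.3f}")
```

The loop makes the tradeoff in the abstract visible: each pass through the master-subproblem exchange is one decomposition step, so generating several proposals per step (as the paper proposes) directly shortens this loop at the cost of harder pricing problems.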

Keywords: approximate subproblem, Dantzig-Wolfe decomposition, large-scale models, multiple subproblems

Procedia PDF Downloads 167
4242 Aggregation Scheduling Algorithms in Wireless Sensor Networks

Authors: Min Kyung An

Abstract:

In wireless sensor networks, which consist of tiny wireless sensor nodes with limited battery power, one of the most fundamental applications is data aggregation, which collects nearby environmental readings and aggregates the data toward a designated destination called a sink node. The key issues in data aggregation are time efficiency and energy consumption, given the nodes' limited energy, and the related problem, named Minimum Latency Aggregation Scheduling (MLAS), has therefore been the focus of many researchers. Its objective is to compute a minimum-latency schedule, that is, a schedule with the minimum number of timeslots, such that the sink node can receive the aggregated data from all the other nodes without any collision or interference. For this problem, two interference models have been adopted: the graph model and the more realistic physical interference model known as Signal-to-Interference-plus-Noise Ratio (SINR), combined with different power models (uniform power, and non-uniform power with or without power control) and different antenna models (omni-directional and directional). In this survey article, as the problem has been proven NP-hard, we present and compare several state-of-the-art approximation algorithms across these models, using latency as the performance measure.
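
As a concrete illustration of the graph-model setting, the following Python sketch builds a BFS tree toward the sink and greedily assigns each node the earliest collision-free timeslot for its transmission to its parent. It is a simple heuristic in the spirit of the surveyed algorithms, not any specific algorithm from the survey, and the topology is hypothetical.

```python
from collections import defaultdict, deque

def aggregation_schedule(adj, sink):
    """Greedy MLAS heuristic in the graph interference model: build a
    BFS tree rooted at the sink, then schedule bottom-up, giving each
    node the earliest slot that (a) follows all its children's slots
    and (b) avoids collisions: transmissions u->p and v->q conflict in
    one slot if either sender equals or neighbors the other's receiver."""
    parent, depth = {sink: None}, {sink: 0}
    queue = deque([sink])
    while queue:                          # BFS tree construction
        u = queue.popleft()
        for w in adj[u]:
            if w not in parent:
                parent[w], depth[w] = u, depth[u] + 1
                queue.append(w)
    children = defaultdict(list)
    for v, p in parent.items():
        if p is not None:
            children[p].append(v)
    slot, by_slot = {}, defaultdict(list)  # by_slot: t -> [(sender, receiver)]
    for v in sorted(parent, key=lambda v: -depth[v]):  # deepest first
        if v == sink:
            continue
        t = 1 + max((slot[ch] for ch in children[v]), default=0)
        while any(s in adj[parent[v]] or s == parent[v] or
                  v in adj[r] or v == r
                  for s, r in by_slot[t]):
            t += 1                         # try the next timeslot
        slot[v] = t
        by_slot[t].append((v, parent[v]))
    return slot                            # latency = max slot value

# Hypothetical 6-node topology; node 0 is the sink
adj = {0: [1, 2], 1: [0, 3, 4], 2: [0, 5], 3: [1], 4: [1], 5: [2]}
print(aggregation_schedule(adj, 0))
```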

Keywords: data aggregation, convergecast, gathering, approximation, interference, omni-directional, directional

Procedia PDF Downloads 231
4241 Evaluation of Beam Structure Using Non-Destructive Vibration-Based Damage Detection Method

Authors: Bashir Ahmad Aasim, Abdul Khaliq Karimi, Jun Tomiyama

Abstract:

Material aging is a vital issue across the civil, mechanical, and aerospace engineering communities. The sustenance and reliability of concrete, the most widely used construction material in the world, is a focal point in civil engineering. For a few decades, researchers have presented algorithms that can evaluate a structure globally rather than locally, without harming its serviceability or interfering with traffic; such algorithms enable different methods for evaluating structures non-destructively. In this paper, a non-destructive vibration-based damage detection method is adopted to evaluate two concrete beams, one in a healthy state and the other containing a crack near its bottom face. The study's premise is that damage in a structure affects its modal parameters (natural frequency, mode shape, and damping ratio), which are functions of its physical properties (mass, stiffness, and damping). The assessment first acquires the natural frequency of the sound beam; the vibration response of the cracked beam is then recorded, and the two results are compared to quantify the variation in natural frequency. The study concludes that damage can be detected from the vibration characteristics of a structural member, given the decline observed in the natural frequency of the cracked beam.
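
A minimal version of the comparison step can be sketched in Python: estimate each beam's fundamental frequency as the peak of the FFT magnitude spectrum of its free-vibration record, then compare the two. The sampling rate, decay, and frequencies below are synthetic placeholders, not the experimental data (a crack reduces stiffness, so the cracked beam's frequency is set slightly lower).

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Estimate the fundamental natural frequency as the peak of the
    windowed magnitude spectrum of a free-vibration record."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

# Synthetic free-decay responses: a healthy beam at 120 Hz and a
# cracked beam whose reduced stiffness lowers the frequency to 112 Hz.
fs = 2000
t = np.arange(0, 2, 1 / fs)
healthy = np.exp(-2 * t) * np.sin(2 * np.pi * 120 * t)
cracked = np.exp(-2 * t) * np.sin(2 * np.pi * 112 * t)
f_h = dominant_frequency(healthy, fs)
f_c = dominant_frequency(cracked, fs)
print(f"healthy {f_h:.1f} Hz, cracked {f_c:.1f} Hz, "
      f"drop {100 * (f_h - f_c) / f_h:.1f}%")
```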

Keywords: concrete beam, natural frequency, non-destructive testing, vibration characteristics

Procedia PDF Downloads 112