Search results for: computational accuracy
1049 Improving Fingerprinting-Based Localization System Using Generative AI
Authors: Getaneh Berie Tarekegn
Abstract:
A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. These applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight, multipath, and weather conditions, GNSS systems do not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. The scheme also employs a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.
Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine
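
The feature-extraction step described above can be illustrated with a short, self-contained sketch. This is not the authors' pipeline: the fingerprint dimensions, noise level, and t-SNE settings below are assumptions chosen only to show how a t-SNE embedding of hybrid WLAN/LTE received-signal-strength fingerprints might be produced with scikit-learn.

```python
# Minimal sketch (not the authors' code): t-SNE embedding of synthetic
# hybrid WLAN + LTE RSS fingerprints, standing in for the feature
# extraction step described in the abstract.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Assumed layout: 500 reference points, 40 WLAN APs + 10 LTE cells,
# RSS in dBm with additive noise (purely illustrative numbers).
n_points, n_wlan, n_lte = 500, 40, 10
rss = rng.uniform(-95, -40, size=(n_points, n_wlan + n_lte))
rss += rng.normal(0, 3, size=rss.shape)          # measurement noise

# Embed the noisy high-dimensional fingerprints into a low-dimensional
# space; dominant structure is kept while much of the noise is discarded.
embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(rss)

print(embedding.shape)   # (500, 2) low-dimensional fingerprint features
```
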
Procedia PDF Downloads 59

1048 Simulation and Characterization of Compact Magnetic Proton Recoil Spectrometer for Fast Neutron Spectra Measurements
Authors: Xingyu Peng, Qingyuan Hu, Xuebin Zhu, Xi Yuan
Abstract:
Neutron spectrometry has contributed much to the development of nuclear physics since 1932 and has also become an important tool in several other fields, notably nuclear technology, fusion plasma diagnostics and radiation protection. Compared with neutron fluxes, neutron spectra can provide more detailed information on the internal physical processes of neutron sources, such as fast neutron reactors, fusion plasma, fission-fusion hybrid reactors, and so on. However, a high-performance neutron spectrometer is not commonly available, as it requires the use of large and complex instrumentation. This work describes the development and characterization of a compact magnetic proton recoil (MPR) spectrometer for high-resolution measurements of fast neutron spectra. The compact MPR spectrometer features a large recoil angle, a small permanent analysis magnet, a short beam transport line and a dual-purpose detector array for both steady-state and pulsed neutron spectra measurement. A 3-dimensional electromagnetic particle transport code is developed to simulate the response function of the spectrometer. Simulation results illustrate that the performance of the spectrometer is mainly determined by the n-p recoil foil and proton apertures, and an overall energy resolution of 3% is achieved for 14 MeV neutrons. Dedicated experiments using an alpha source and a mono-energetic neutron beam are employed to verify the simulated response function of the compact MPR spectrometer. These experimental results show good agreement with the simulated ones, which indicates that the simulation code possesses good accuracy and reliability. The compact MPR spectrometer described in this work is a valuable tool for fast neutron spectra measurements for fission or fusion devices.
Keywords: neutron spectrometry, magnetic proton recoil spectrometer, neutron spectra, fast neutron
Procedia PDF Downloads 202

1047 Application of a Model-Free Artificial Neural Networks Approach for Structural Health Monitoring of the Old Lidingö Bridge
Authors: Ana Neves, John Leander, Ignacio Gonzalez, Raid Karoumi
Abstract:
Systematic monitoring and inspection are needed to assess the present state of a structure and predict its future condition. If an irregularity is noticed, repair actions may take place, and the adequate intervention will most probably reduce future maintenance costs, minimize downtime and increase safety by avoiding the failure of the structure as a whole or of one of its structural parts. For this to be possible, decisions must be made at the right time, which implies using systems that can detect abnormalities in their early stage. In this sense, Structural Health Monitoring (SHM) is seen as an effective tool for improving the safety and reliability of infrastructures. This paper explores the decision-making problem in SHM regarding the maintenance of civil engineering structures. The aim is to assess the present condition of a bridge based exclusively on measurements, using the method suggested in this paper, such that action is taken coherently with the information made available by the monitoring system. Artificial Neural Networks are trained and their ability to predict structural behavior is evaluated in the light of a case study where acceleration measurements are acquired from a bridge located in Stockholm, Sweden. This relatively old bridge is presently still in operation despite experiencing obvious problems already reported in previous inspections. The prediction errors provide a measure of the accuracy of the algorithm and are subjected to further investigation, which comprises concepts like clustering analysis and statistical hypothesis testing. These make it possible to interpret the obtained prediction errors, draw conclusions about the state of the structure and thus support decision-making regarding its maintenance.
Keywords: artificial neural networks, clustering analysis, model-free damage detection, statistical hypothesis testing, structural health monitoring
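
As a concrete illustration of this model-free idea, the sketch below trains a small neural network to predict one acceleration channel from another under a healthy condition, then compares the prediction-error populations for new healthy data and simulated damaged data with a hypothesis test. The signals, network size, and damage model are invented stand-ins, not the Lidingö Bridge data.

```python
# Illustrative sketch only: a model-free damage indicator in the spirit of
# the paper, using synthetic acceleration signals instead of bridge data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy import stats

rng = np.random.default_rng(1)

def make_signal(n, damage=0.0):
    """Synthetic 2-channel 'acceleration' record; damage shifts the response."""
    t = np.arange(n) * 0.01
    x = np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.normal(size=n)
    y = 0.8 * np.sin(2 * np.pi * 2.0 * t + 0.3 + damage) + 0.1 * rng.normal(size=n)
    return x.reshape(-1, 1), y

# Train on the healthy condition: predict sensor y from sensor x.
x_train, y_train = make_signal(5000)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0).fit(x_train, y_train)

# Prediction errors on new healthy data and on (simulated) damaged data.
x_ref, y_ref = make_signal(2000)
x_dmg, y_dmg = make_signal(2000, damage=0.5)
err_ref = np.abs(model.predict(x_ref) - y_ref)
err_dmg = np.abs(model.predict(x_dmg) - y_dmg)

# Statistical hypothesis test on the two error populations.
stat, p = stats.mannwhitneyu(err_ref, err_dmg, alternative="less")
print(f"median error healthy={np.median(err_ref):.3f}, "
      f"new={np.median(err_dmg):.3f}, p={p:.2e}")
```
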
Procedia PDF Downloads 208

1046 An Evaluation on the Effectiveness of a 3D Printed Composite Compression Mold
Authors: Peng Hao Wang, Garam Kim, Ronald Sterkenburg
Abstract:
The applications of composite materials within the aviation industry have been increasing at a rapid pace. However, the growing applications of composite materials have also led to growing demand for more tooling to support their manufacturing processes. Tooling and tooling maintenance represent a large portion of the composite manufacturing process and cost. Therefore, the industry's adaptability to new techniques for fabricating high quality tools quickly and inexpensively will play a crucial role in composite materials' growing popularity in the aviation industry. One popular tool fabrication technique currently being developed involves additive manufacturing such as 3D printing. Although additive manufacturing and 3D printing are not entirely new concepts, the technique has been gaining popularity due to its ability to fabricate components quickly, with low material waste and low cost. In this study, a team of Purdue University School of Aviation and Transportation Technology (SATT) faculty and students investigated the effectiveness of a 3D printed composite compression mold. A 3D printed composite compression mold was fabricated by 3D scanning a steel valve cover of an aircraft reciprocating engine. The 3D printed composite compression mold was used to fabricate carbon fiber versions of the aircraft reciprocating engine valve cover. The 3D printed composite compression mold was evaluated for its performance, durability, and dimensional stability, while the fabricated carbon fiber valve covers were evaluated for their accuracy and quality. The results and data gathered from this study will determine the effectiveness of the 3D printed composite compression mold in a mass production environment and provide valuable information for future understanding, improvements, and design considerations of 3D printed composite molds.
Keywords: additive manufacturing, carbon fiber, composite tooling, molds
Procedia PDF Downloads 199

1045 Influence of Internal Topologies on Components Produced by Selective Laser Melting: Numerical Analysis
Authors: C. Malça, P. Gonçalves, N. Alves, A. Mateus
Abstract:
Regardless of the manufacturing process used, subtractive or additive, and of the material, purpose and application, components are conventionally produced as solid masses with more or less complex shapes depending on the production technology selected. Aspects such as reducing the weight of components, associated with the low volume of material required and the almost non-existent material waste, speed and flexibility of production and, primarily, a high mechanical strength combined with high structural performance, are competitive advantages in any industrial sector, from automotive, molds, aviation, aerospace, construction, pharmaceuticals, and medicine to, more recently, human tissue engineering. Such features, properties and functionalities are attained in metal components produced using the additive Rapid Prototyping technique based on metal powders, commonly known as Selective Laser Melting (SLM), with optimized internal topologies and varying densities. In order to produce components with high strength and high structural and functional performance, regardless of the type of application, three different internal topologies were developed and analyzed using numerical computational tools. The developed topologies were numerically submitted to mechanical compression and four-point bending testing. Finite Element Analysis results demonstrate how different internal topologies can contribute to improved mechanical properties, even with a high degree of porosity relative to fully dense components. The results are very promising not only from the point of view of mechanical resistance, but especially through the achievement of considerable variation in density without loss of high structural and functional performance.
Keywords: additive manufacturing, internal topologies, porosity, rapid prototyping, selective laser melting
Procedia PDF Downloads 331

1044 Minimizing the Drilling-Induced Damage in Fiber Reinforced Polymeric Composites
Authors: S. D. El Wakil, M. Pladsen
Abstract:
Fiber reinforced polymeric (FRP) composites are finding widespread industrial applications because of their exceptionally high specific strength and specific modulus of elasticity. Nevertheless, ready-to-use components or products made of FRP composites are very seldom obtained directly. Secondary processing by machining, particularly drilling, is almost always required to make holes for fastening components together to produce assemblies. That creates problems since the FRP composites are neither homogeneous nor isotropic. Some of the problems that are encountered include the subsequent damage in the region around the drilled hole and the drilling-induced delamination of the plies, which occurs at both the entrance and the exit planes of the workpiece. Evidently, the functionality of the workpiece would be detrimentally affected. The current work was carried out with the aim of eliminating or at least minimizing the workpiece damage associated with drilling of FRP composites. Each test specimen involves a woven reinforced graphite fiber/epoxy composite having a thickness of 12.5 mm (0.5 inch). A large number of test specimens were subjected to drilling operations with different combinations of feed rates and cutting speeds. The drilling-induced damage was taken as the absolute value of the difference between the drilled hole diameter and the nominal one, expressed as a percentage of the nominal diameter. The latter was determined for each combination of feed rate and cutting speed, and a matrix comprising those values was established, where the columns indicate varying feed rates and the rows indicate varying cutting speeds. Next, the analysis of variance (ANOVA) approach was employed using Minitab software, in order to obtain the combination that would minimize the drilling-induced damage. Experimental results show that low feed rates coupled with low cutting speeds yielded the best results.
Keywords: drilling of composites, dimensional accuracy of holes drilled in composites, delamination and charring, graphite-epoxy composites
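
The damage measure and the ANOVA step lend themselves to a short worked sketch. The feed-rate and cutting-speed levels, hole diameters, and replicate counts below are invented for illustration (the study itself used Minitab); the point is only to show how the percentage damage is computed and fed into a two-factor analysis of variance.

```python
# Hypothetical example: two-factor ANOVA of drilling-induced damage
# (values are invented, not the paper's measurements).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(2)
feeds = [0.05, 0.10, 0.20]      # mm/rev (assumed levels)
speeds = [1000, 2000, 3000]     # rpm   (assumed levels)
d_nom = 6.0                     # nominal hole diameter, mm

rows = []
for f in feeds:
    for s in speeds:
        for _ in range(3):                       # 3 replicates per cell
            d_hole = d_nom + 0.02 + 0.4 * f + 1e-5 * s + rng.normal(0, 0.01)
            damage = abs(d_hole - d_nom) / d_nom * 100.0   # % of nominal
            rows.append({"feed": f, "speed": s, "damage": damage})
df = pd.DataFrame(rows)

# Two-way ANOVA with interaction, analogous to the Minitab analysis.
model = ols("damage ~ C(feed) * C(speed)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```
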
Procedia PDF Downloads 389

1043 In Silico Study of Antiviral Drugs Against Three Important Proteins of SARS-CoV-2 Using Molecular Docking Method
Authors: Alireza Jalalvand, Maryam Saleh, Somayeh Behjat Khatouni, Zahra Bahri Najafi, Foroozan Fatahinia, Narges Ismailzadeh, Behrokh Farahmand
Abstract:
Objective: The recent outbreak of coronavirus (SARS-CoV-2) imposed a global pandemic on the world. Despite the increasing prevalence of the disease, there are no effective drugs to treat it. A suitable and rapid way to identify an effective drug and address the global pandemic is a computational drug study. This study used molecular docking methods to examine the potential inhibition of over 50 antiviral drugs against three fundamental proteins of SARS-CoV-2. Methods: Through a literature review, three important proteins (a key protease, the RNA-dependent RNA polymerase (RdRp), and the spike protein) were selected as drug targets. Three-dimensional (3D) structures of the protease, spike, and RdRp proteins were obtained from the Protein Data Bank and energy-minimized. Over 50 antiviral drugs were considered candidates for protein inhibition, and their 3D structures were obtained from drug banks. The AutoDock 4.2 software was used to define the molecular docking settings and run the algorithm. Results: Five drugs, including indinavir, lopinavir, saquinavir, nelfinavir, and remdesivir, exhibited the highest inhibitory potency against all three proteins based on the binding energies and drug binding positions deduced from docking and hydrogen-bonding analysis. Conclusions: According to the results, among the drugs mentioned, saquinavir and lopinavir showed the highest inhibitory potency against all three proteins compared to other drugs. They may enter laboratory-phase studies as a dual-drug treatment to inhibit SARS-CoV-2.
Keywords: covid-19, drug repositioning, molecular docking, lopinavir, saquinavir
Procedia PDF Downloads 88

1042 Floor Response Spectra of RC Frames: Influence of the Infills on the Seismic Demand on Non-Structural Components
Authors: Gianni Blasi, Daniele Perrone, Maria Antonietta Aiello
Abstract:
The seismic vulnerability of non-structural components is nowadays recognized to be a key issue in performance-based earthquake engineering. Recent loss estimation studies, as well as the damage observed during past earthquakes, evidenced how non-structural damage represents the highest share of economic loss in a building and can in many cases be crucial from a life-safety perspective during the post-earthquake emergency. The procedures developed to evaluate the seismic demand on non-structural components have been constantly improved, and recent studies demonstrated how the existing formulations provided by the main Standards generally ignore features which have a significant influence on the seismic accelerations/displacements to which non-structural components are subjected. Since the influence of the infills on the dynamic behaviour of RC structures has already been evidenced by many authors, it is worth noting that the evaluation of the seismic demand on non-structural components should consider the presence of the infills as well as their mechanical properties. This study focuses on the evaluation of time-history floor accelerations in RC buildings, which are a useful means to perform seismic vulnerability analyses of non-structural components through the well-known cascade method. Dynamic analyses are performed on an 8-storey RC frame, taking into account the presence of the infills; the influence of the elastic modulus of the panel on the results is investigated, as well as the presence of openings. Floor accelerations obtained from the analyses are used to evaluate the floor response spectra, in order to define the demand on non-structural components depending on the properties of the infills. Finally, the results are compared with formulations provided by the main International Standards, in order to assess their accuracy and, where necessary, define the improvements required according to the results of the present research work.
Keywords: floor spectra, infilled RC frames, non-structural components, seismic demand
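
The cascade step, turning a computed floor acceleration history into a floor response spectrum, can be sketched compactly. The snippet below uses the Newmark average-acceleration scheme on elastic single-degree-of-freedom oscillators with an assumed 5% damping ratio and a synthetic floor acceleration record in place of the frame analysis output; it is an illustration of the procedure, not the paper's code.

```python
# Minimal sketch (assumptions: elastic SDOF components, 5% damping,
# synthetic floor acceleration instead of the analysis output).
import numpy as np

def newmark_sdof(ag, dt, period, zeta=0.05):
    """Peak absolute acceleration of an SDOF oscillator on a floor whose
    acceleration history is ag (m/s^2), via Newmark average acceleration."""
    w = 2 * np.pi / period
    k, c = w**2, 2 * zeta * w                 # unit mass
    u = v = 0.0
    a = -ag[0]                                # relative acceleration
    peak = 0.0
    beta, gamma = 0.25, 0.5
    keff = k + gamma / (beta * dt) * c + 1.0 / (beta * dt**2)
    for agi in ag[1:]:
        p = (-agi
             + (1.0 / (beta * dt**2) + gamma / (beta * dt) * c) * u
             + (1.0 / (beta * dt) + (gamma / beta - 1.0) * c) * v
             + ((1.0 / (2 * beta) - 1.0) + dt * (gamma / (2 * beta) - 1.0) * c) * a)
        u_new = p / keff
        v_new = (gamma / (beta * dt) * (u_new - u)
                 + (1 - gamma / beta) * v + dt * (1 - gamma / (2 * beta)) * a)
        a_new = ((u_new - u) / (beta * dt**2) - v / (beta * dt)
                 - (1.0 / (2 * beta) - 1.0) * a)
        u, v, a = u_new, v_new, a_new
        peak = max(peak, abs(a + agi))        # absolute component acceleration
    return peak

dt = 0.005
t = np.arange(0, 20, dt)
floor_acc = 2.0 * np.sin(2 * np.pi * 3.0 * t) * np.exp(-0.1 * t)  # synthetic record

periods = np.linspace(0.05, 2.0, 40)
spectrum = [newmark_sdof(floor_acc, dt, T) for T in periods]
print(f"peak spectral acceleration: {max(spectrum):.2f} m/s^2")
```
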
Procedia PDF Downloads 326

1041 3D Numerical Study of Tsunami Loading and Inundation in a Model Urban Area
Authors: A. Bahmanpour, I. Eames, C. Klettner, A. Dimakopoulos
Abstract:
We develop a new set of diagnostic tools to analyze inundation into a model district using three-dimensional CFD simulations, with a view to generating a database against which to test simpler models. A three-dimensional model of an Oregon city with different-sized groups of buildings next to the coastline is used to run calculations of the movement of a long-period wave on the shore. The initial and boundary conditions of the off-shore water are set using a nonlinear inverse method based on Eulerian spatial information matching experimental Eulerian time series measurements of water height. The water movement is followed in time, and this enables the pressure distribution on every surface of each building to be tracked over time. The three-dimensional numerical data set is validated against published experimental work. In the first instance, we use the dataset as a basis to understand the success of reduced models, including 2D shallow-water models and reduced 1D models, in predicting water heights, flow velocities and forces. This is because models based on the shallow water equations are known to underestimate drag forces after the initial surge of water. The second component is to identify critical flow features, such as hydraulic jumps and choked states, which are flow regions where dissipation occurs and drag forces are large. Finally, we describe how future tsunami inundation models should be modified to account for the complex effects of buildings through drag and blocking. Financial support from UCL and HR Wallingford is greatly appreciated. The authors would like to thank Professor Daniel Cox and Dr. Hyoungsu Park for providing the data on the Seaside Oregon experiment.
Keywords: computational fluid dynamics, extreme events, loading, tsunami
Procedia PDF Downloads 115

1040 Clinical Relevance of TMPRSS2-ERG Fusion Marker for Prostate Cancer
Authors: Shalu Jain, Anju Bansal, Anup Kumar, Sunita Saxena
Abstract:
Objectives: The novel TMPRSS2:ERG gene fusion is a common somatic event in prostate cancer that in some studies is linked with a more aggressive disease phenotype. Thus, this study aims to determine whether clinical variables are associated with the presence of the TMPRSS2:ERG fusion gene transcript in Indian patients with prostate cancer. Methods: We evaluated the association of clinical variables with the presence or absence of the TMPRSS2:ERG gene fusion in prostate cancer and BPH patients. Patients referred for prostate biopsy because of abnormal DRE and/or elevated sPSA were enrolled in this prospective clinical study. TMPRSS2:ERG mRNA copies were quantified using a TaqMan real-time PCR assay in prostate biopsy samples (N=42). The T2:ERG assay detects the gene fusion mRNA isoform TMPRSS2 exon1 to ERG exon4. Results: Histopathology confirmed 25 cases as prostate adenocarcinoma (PCa) and 17 patients as benign prostatic hyperplasia (BPH). Out of 25 PCa cases, 16 (64%) were T2:ERG fusion positive. All 17 BPH controls were fusion negative. The T2:ERG fusion transcript was exclusively specific for prostate cancer, as no BPH case was detected as having the T2:ERG fusion, showing 100% specificity. The positive predictive value of the fusion marker for prostate cancer is thus 100% and the negative predictive value is 65.3%. The T2:ERG fusion marker is significantly associated with clinical variables such as the number of positive cores in the prostate biopsy, Gleason score, serum PSA, perineural invasion, perivascular invasion and periprostatic fat involvement. Conclusions: Prostate cancer is a heterogeneous disease that may be defined by molecular subtypes such as the TMPRSS2:ERG fusion. In the present prospective study, the T2:ERG quantitative assay demonstrated high specificity for predicting biopsy outcome; sensitivity was similar to the prevalence of T2:ERG gene fusions in prostate tumors. These data suggest that further improvement in diagnostic accuracy could be achieved using a nomogram that combines T2:ERG with other markers and risk factors for prostate cancer.
Keywords: prostate cancer, genetic rearrangement, TMPRSS2:ERG fusion, clinical variables
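
To make the reported figures concrete, the counts stated in the abstract (16 of 25 PCa cases fusion-positive, all 17 BPH controls fusion-negative) reproduce the quoted predictive values directly; a minimal check:

```python
# Recomputing the diagnostic metrics from the counts stated in the abstract.
tp = 16        # PCa cases positive for the T2:ERG fusion
fn = 25 - tp   # PCa cases negative for the fusion
tn = 17        # BPH controls, all fusion-negative
fp = 0

sensitivity = tp / (tp + fn)   # 16/25 = 0.64
specificity = tn / (tn + fp)   # 17/17 = 1.00
ppv = tp / (tp + fp)           # 16/16 = 1.00
npv = tn / (tn + fn)           # 17/26 ≈ 0.65

print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, "
      f"PPV={ppv:.1%}, NPV={npv:.1%}")
```
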
Procedia PDF Downloads 444

1039 Optimal Emergency Shipment Policy for a Single-Echelon Periodic Review Inventory System
Authors: Saeed Poormoaied, Zumbul Atan
Abstract:
Emergency shipments provide a powerful mechanism to alleviate the risk of imminent stock-outs and can result in substantial benefits in an inventory system. Customer satisfaction and a high service level are immediate consequences of utilizing emergency shipments. In this paper, we consider a single-echelon periodic review inventory system consisting of a single local warehouse replenished from a central warehouse with ample capacity, in an infinite-horizon setting. Since the structure of the optimal policy appears to be complicated, we analyze this problem under an order-up-to-S inventory control policy framework, the (S, T) policy, with the emergency shipment consideration. In each period of the periodic review policy, there is a single opportunity, at any point in time, for an emergency shipment, so that in case of a stock-out, an emergency shipment is requested. The goal is to determine the timing and amount of the emergency shipment during a period (emergency shipment policy) as well as the base stock periodic review policy parameters (replenishment policy). We show how taking advantage of an emergency shipment during a period improves the performance of the classical (S, T) policy, especially when the fixed and unit emergency shipment costs are small. Investigating the structure of the objective function, we develop an exact algorithm for finding the optimal solution. We also provide a heuristic and an approximation algorithm for the periodic review inventory system problem. The experimental analyses indicate that the heuristic algorithm is computationally more efficient than the approximation algorithm, but in terms of solution quality, the approximation algorithm performs very well. We achieve up to 13% cost savings in the (S, T) policy if we apply the proposed emergency shipment policy. Moreover, our computational results reveal that the approximated solution is often within 0.21% of the globally optimal solution.
Keywords: emergency shipment, inventory, periodic review policy, approximation algorithm
Procedia PDF Downloads 141

1038 Fast Bayesian Inference of Multivariate Block-Nearest Neighbor Gaussian Process (NNGP) Models for Large Data
Authors: Carlos Gonzales, Zaida Quiroz, Marcos Prates
Abstract:
Several spatial variables collected at the same location that share a common spatial distribution can be modeled simultaneously through a multivariate geostatistical model that takes into account the correlation between these variables and the spatial autocorrelation. The main goal of this model is to perform spatial prediction of these variables in the region of study. Here we focus on a geostatistical multivariate formulation that relies on sharing common spatial random effect terms. In particular, the first response variable can be modeled by a mean that incorporates a shared random spatial effect, while the other response variables depend on this shared spatial term, in addition to specific random spatial effects. Each spatial random effect is defined through a Gaussian process with a valid covariance function, but in order to improve the computational efficiency when the data are large, each Gaussian process is approximated by a Gaussian Markov random field (GMRF), specifically the block nearest neighbor Gaussian process (Block-NNGP). This approach involves dividing the spatial domain into several dependent blocks under certain constraints, where the cross blocks allow capturing the spatial dependence on a large scale, while each individual block captures the spatial dependence on a smaller scale. The multivariate geostatistical model belongs to the class of Latent Gaussian Models; thus, to achieve fast Bayesian inference, the integrated nested Laplace approximation (INLA) method is used. The good performance of the proposed model is shown through simulations and applications to massive data.
Keywords: Block-NNGP, geostatistics, Gaussian process, GMRF, INLA, multivariate models
Procedia PDF Downloads 97

1037 Locating Potential Site for Biomass Power Plant Development in Central Luzon Philippines Using GIS-Based Suitability Analysis
Authors: Bryan M. Baltazar, Marjorie V. Remolador, Klathea H. Sevilla, Imee Saladaga, Loureal Camille Inocencio, Ma. Rosario Concepcion O. Ang
Abstract:
Biomass energy is a traditional source of sustainable energy, which has been widely used in developing countries. The Philippines, specifically Central Luzon, has an abundant source of biomass. Hence, it could supply abundant agricultural residues (rice husks) as feedstock for a biomass power plant. However, locating a potential site for biomass development is a complex process which involves different factors (physical, environmental, socio-economic, and risk-related) that are usually diverse and conflicting. Moreover, biomass distribution is highly dispersed geographically. Thus, this study develops an integrated method combining Geographical Information Systems (GIS) with methods for energy planning, namely Multi-Criteria Decision Analysis (MCDA) and the Analytic Hierarchy Process (AHP), for locating a suitable site for biomass power plant development in Central Luzon, Philippines, by considering different constraints and factors. Using MCDA, a three-level hierarchy of factors and constraints was produced, with corresponding weights determined by experts using AHP. Applying the results, a suitability map for biomass power plant development in Central Luzon was generated. It showed that the central part of the region has the highest potential for biomass power plant development. This is because of the characteristics of the area, such as the abundance of rice fields, generally flat land surfaces, accessible roads and grid networks, and low risks of flooding and landslide. This study recommends the use of higher-accuracy resource maps and further analysis in selecting the optimum site for biomass power plant development that would account for the cost and transportation of biomass residues.
Keywords: analytic hierarchy process, biomass energy, GIS, multi-criteria decision analysis, site suitability analysis
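
The AHP weighting step mentioned above can be shown in a few lines. The pairwise comparison matrix, factor names, and suitability scores in this sketch are invented placeholders (the study used expert judgments); it only illustrates how priority weights and a consistency ratio are derived and then used in a weighted overlay.

```python
# Illustrative AHP weight calculation (pairwise judgments are invented).
import numpy as np

# Assumed 4 factors: rice-residue supply, slope, road access, hazard risk.
A = np.array([
    [1.0, 3.0, 5.0, 7.0],
    [1/3, 1.0, 3.0, 5.0],
    [1/5, 1/3, 1.0, 3.0],
    [1/7, 1/5, 1/3, 1.0],
])

# Priority vector = principal eigenvector of the comparison matrix.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio (RI = 0.90 is Saaty's random index for n = 4).
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.90
print("weights:", np.round(weights, 3), "CR:", round(cr, 3))

# Weighted-overlay suitability for one raster cell with factor scores in 0-1.
scores = np.array([0.9, 0.8, 0.6, 0.7])
print("suitability:", float(weights @ scores))
```
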
Procedia PDF Downloads 425

1036 Scheduling Jobs with Stochastic Processing Times or Due Dates on a Server to Minimize the Number of Tardy Jobs
Authors: H. M. Soroush
Abstract:
The problem of scheduling products and services for on-time deliveries is of paramount importance in today's competitive environments. It arises in many manufacturing and service organizations where it is desirable to complete jobs (products or services) with different weights (penalties) on or before their due dates. In such environments, schedulers must frequently decide whether to schedule a job based on its processing time, due date, and the penalty for tardy delivery to improve the system performance. For example, it is common to measure the weighted number of late jobs or the percentage of on-time shipments to evaluate the performance of a semiconductor production facility or an automobile assembly line. In this paper, we address the problem of scheduling a set of jobs on a server where processing times or due dates of jobs are random variables and fixed weights (penalties) are imposed on the jobs' late deliveries. The goal is to find the schedule that minimizes the expected weighted number of tardy jobs. The problem is NP-hard to solve; however, we explore three scenarios of the problem wherein: (i) both processing times and due dates are stochastic; (ii) processing times are stochastic and due dates are deterministic; and (iii) processing times are deterministic and due dates are stochastic. We prove that special cases of these scenarios are solvable optimally in polynomial time, and introduce efficient heuristic methods for the general cases. Our computational results show that the heuristics perform well in yielding either optimal or near-optimal sequences. The results also demonstrate that the stochasticity of processing times or due dates can affect scheduling decisions. Moreover, the proposed problem is general in the sense that its special cases reduce to some new and some classical stochastic single machine models.
Keywords: number of late jobs, scheduling, single server, stochastic
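
The objective being minimized can be made concrete with a small Monte Carlo experiment. The sketch below evaluates the expected weighted number of tardy jobs for two simple orderings under stochastic (exponential) processing times and deterministic due dates; the instance data and the choice of orderings are illustrative assumptions, not the paper's heuristics.

```python
# Sketch only: Monte Carlo evaluation of the expected weighted number of
# tardy jobs on a single server, with stochastic processing times
# (exponential) and deterministic due dates; all data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n = 8
mean_p = rng.uniform(2, 10, n)        # mean processing times
due = rng.uniform(10, 50, n)          # due dates
w = rng.uniform(1, 5, n)              # tardiness weights (penalties)

def expected_weighted_tardy(order, n_sim=20000):
    p = rng.exponential(mean_p[order], size=(n_sim, n))   # sampled times
    completion = np.cumsum(p, axis=1)
    tardy = completion > due[order]
    return (tardy * w[order]).sum(axis=1).mean()

edd = np.argsort(due)                  # earliest-due-date order
wspt = np.argsort(mean_p / w)          # weighted-shortest-expected-time order
print("EDD :", round(expected_weighted_tardy(edd), 3))
print("WSPT:", round(expected_weighted_tardy(wspt), 3))
```
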
Procedia PDF Downloads 497

1035 Soluble CD36 and Cardiovascular Risk in Middle-Aged Subjects
Authors: Mohammad Alkhatatbeh, Nehad Ayoub, Nizar Mhaidat, Nesreen Saadeh, Lisa Lincz
Abstract:
CD36 is involved in the development of atherosclerosis by enhancing macrophage endocytosis of oxidized low-density lipoproteins and foam cell formation. Soluble CD36 (sCD36) was found to be elevated in type 2 diabetic patients and has been proposed as a marker of insulin resistance and atherosclerosis. In young subjects, sCD36 was associated with cardiovascular risk factors including obesity and hypertriglyceridemia. This study was conducted to further investigate the relationship between plasma sCD36 and cardiovascular risk factors among middle-aged patients with metabolic syndrome (MetS) and healthy controls. sCD36 concentrations were determined by enzyme-linked immunosorbent assays (ELISA) for 41 patients with MetS and 36 healthy controls. Data for other variables were obtained from patients' medical records. sCD36 concentrations were relatively low compared to most other studies and were not significantly different between the MetS group and controls (P-value=0.17). sCD36 was also not correlated with age, body mass index, glucose, lipid profile, serum electrolytes and blood counts. sCD36 was not significantly different between subjects with obesity, hyperglycemia, dyslipidemia, hypertension or cardiovascular disease and those without these abnormalities (P-value > 0.05). The inconsistency between the results reported in this study and other studies may be unique to the study population or be a result of the lack of a reliable standardized method for determining absolute sCD36 concentrations. However, further investigations are required to assess CD36 tissue expression in the study population and to assess the accuracy of various commercially available sCD36 ELISA kits. Thus, the availability of a standardized, simple sCD36 ELISA that could be performed in any basic laboratory would be preferable to the specialized flow cytometry methods that detect CD36+ microparticles if it were to be used as a biomarker.
Keywords: metabolic syndrome, CD36, cardiovascular risk, obesity, type 2 diabetes mellitus
Procedia PDF Downloads 266

1034 Effects of the Air Supply Outlets Geometry on Human Comfort inside Living Rooms: CFD vs. ADPI
Authors: Taher M. Abou-deif, Esmail M. El-Bialy, Essam E. Khalil
Abstract:
The paper is devoted to numerically investigating the influence of the air supply outlet geometry on human comfort inside living rooms. A computational fluid dynamics model is developed to examine the air flow characteristics of a room with different supply air diffusers. The work focuses on air flow patterns and thermal behavior in a room with a small number of occupants. As an input to the full-scale 3-D room model, a 2-D air supply diffuser model that supplies the direction and magnitude of air flow into the room is developed. The effect of air distribution on thermal comfort parameters was investigated by changing the air supply diffuser type, angles and velocity. The locations and numbers of air supply diffusers were also investigated. The pre-processor Gambit is used to create the geometric model with parametric features. The commercially available simulation software Fluent 6.3 is incorporated to solve the differential equations governing the conservation of mass, momentum (in three directions) and energy in the prediction of the air flow distribution. Turbulence effects of the flow are represented by a well-developed two-equation turbulence model; in this work, the so-called standard k-ε turbulence model, one of the most widespread turbulence models for industrial applications, was utilized. The basic parameters included in this work, air dry-bulb temperature, air velocity, relative humidity and turbulence parameters, are used for numerical predictions of indoor air distribution and thermal comfort. The thermal comfort predictions in this work were based on the ADPI (Air Diffusion Performance Index), the PMV (Predicted Mean Vote) model and the PPD (Percentage of People Dissatisfied) model; the PMV and PPD were estimated using Fanger's model.
Keywords: thermal comfort, Fanger's model, ADPI, energy efficiency
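
The PMV-PPD link in Fanger's model is compact enough to state explicitly. The helper below implements the standard ISO 7730 relation between the two indices; the PMV values fed to it are illustrative only and are not taken from the simulations.

```python
# PPD from PMV according to Fanger's relation (ISO 7730); the PMV values
# below are illustrative, not results of the CFD simulations.
import math

def ppd_from_pmv(pmv: float) -> float:
    """Percentage of People Dissatisfied (%) for a given Predicted Mean Vote."""
    return 100.0 - 95.0 * math.exp(-0.03353 * pmv**4 - 0.2179 * pmv**2)

for pmv in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(f"PMV={pmv:+.1f} -> PPD={ppd_from_pmv(pmv):5.1f} %")
# PMV = 0 gives the theoretical minimum PPD of 5 %.
```
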
Procedia PDF Downloads 409

1033 Quality Assessment of New Zealand Mānuka Honeys Using Hyperspectral Imaging Combined with Deep 1D-Convolutional Neural Networks
Authors: Hien Thi Dieu Truong, Mahmoud Al-Sarayreh, Pullanagari Reddy, Marlon M. Reis, Richard Archer
Abstract:
New Zealand mānuka honey is a honeybee product derived mainly from Leptospermum scoparium nectar. The potent antibacterial activity of mānuka honey derives principally from methylglyoxal (MGO), in addition to the hydrogen peroxide and other lesser activities present in all honey. MGO is formed from dihydroxyacetone (DHA), which is unique to L. scoparium nectar. Mānuka honey also has an idiosyncratic phenolic profile that is useful as a chemical marker. Authentic mānuka honey is highly valuable, but almost all honey is formed from natural mixtures of nectars harvested by a hive over a time period. Once diluted by other nectars, mānuka honey irrevocably loses value. We aimed to apply hyperspectral imaging to honey frames before bulk extraction to minimise the dilution of genuine mānuka by other honey and ensure authenticity at the source. This technology is non-destructive and suitable for an industrial setting. Chemometrics using linear Partial Least Squares (PLS) and Support Vector Machine (SVM) models showed limited efficacy in interpreting the chemical footprints due to large non-linear relationships between predictor and predictand in a large sample set, likely due to honey quality variability across geographic regions. Therefore, an advanced modelling approach, one-dimensional convolutional neural networks (1D-CNN), was investigated for analysing the hyperspectral data and extracting biochemical information from honey. The 1D-CNN model showed superior prediction of honey quality (R² = 0.73, RMSE = 2.346, RPD = 2.56) to PLS (R² = 0.66, RMSE = 2.607, RPD = 1.91) and SVM (R² = 0.67, RMSE = 2.559, RPD = 1.98). Classification of mono-floral mānuka honey from multi-floral and non-mānuka honey exceeded 90% accuracy for all models tried. Overall, this study reveals the potential of HSI and deep learning modelling for automating the evaluation of honey quality in frames.
Keywords: mānuka honey, quality, purity, potency, deep learning, 1D-CNN, chemometrics
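
A minimal sketch of a 1D-CNN regressor of the kind described is given below. The number of spectral bands, layer sizes, and the use of random placeholder data are assumptions for illustration; they are not the architecture or dataset used in the study.

```python
# Illustrative 1D-CNN for predicting a quality attribute (e.g., MGO) from
# hyperspectral pixel spectra; architecture and data shapes are assumed.
import numpy as np
from tensorflow.keras import layers, models

n_samples, n_bands = 1000, 224          # assumed spectral resolution
X = np.random.rand(n_samples, n_bands, 1).astype("float32")
y = np.random.rand(n_samples).astype("float32")   # placeholder targets

model = models.Sequential([
    layers.Input(shape=(n_bands, 1)),
    layers.Conv1D(16, kernel_size=7, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),                    # regression output
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, mae] on the placeholder data
```
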
Procedia PDF Downloads 139

1032 An Overview of Domain Models of Urban Quantitative Analysis
Authors: Mohan Li
Abstract:
Nowadays, intelligent research technology is becoming more important than traditional research methods in urban research work, and this trend will strengthen greatly in the next few decades. Frequently, such analysis work cannot be carried out without some software engineering knowledge, and domain models of urban research become necessary when applying software engineering knowledge to urban work. In many urban planning practice projects, building rational models, feeding them reliable data, and providing enough computation all provide indispensable assistance in producing good urban planning. Throughout the work process, domain models can optimize workflow design. At present, human beings have entered the era of big data. The amount of digital data generated by cities every day will increase at an exponential rate, and new data forms are constantly emerging. How to select a suitable data set from the massive amount of data and how to manage and process it have become abilities that more and more planners and urban researchers need to possess. This paper summarizes and makes predictions about the emergence of technologies and technological iterations that may affect urban research in the future, helping researchers discover urban problems and implement targeted sustainable urban strategies. These are summarized into seven major domain models: the urban and rural regional domain model, the urban ecological domain model, the urban industry domain model, the development dynamic domain model, the urban social and cultural domain model, the urban traffic domain model, and the urban space domain model. These seven domain models can be used to guide the construction of systematic urban research topics and help researchers organize a series of intelligent analytical tools, such as Python, R, GIS, etc. These seven models make full use of quantitative spatial analysis, machine learning, and other technologies to achieve higher efficiency and accuracy in urban research, assisting people in making reasonable decisions.
Keywords: big data, domain model, urban planning, urban quantitative analysis, machine learning, workflow design
Procedia PDF Downloads 177

1031 Faster Pedestrian Recognition Using Deformable Part Models
Authors: Alessandro Preziosi, Antonio Prioletti, Luca Castangia
Abstract:
Deformable part models achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model algorithm fast enough for real-time use by exploiting information about the camera position and orientation. This implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation. This approach is almost an order of magnitude faster than the reference DPM implementation, with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location. The range of acceptable sizes and positions is set by looking at the statistical distribution of bounding boxes in labelled images. With this approach it is not necessary to compute the entire feature pyramid: for example, higher-resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and an increase in speed by a factor of two. Furthermore, to reduce misdetections involving small pedestrians near the horizon, input images are supersampled near the horizon. Supersampling the image at 1.5 times the original scale results in an increase in precision of about 4%. The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best performing DPM-based method. By allowing for a small loss in precision, computational time can easily be brought down to our target of 100 ms per image, reaching a solution that is faster and still more precise than all publicly available DPM implementations.
Keywords: autonomous vehicles, deformable part model, dpm, pedestrian detection, real time
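
The speed-up from frequency-domain convolution is easy to demonstrate in isolation. The snippet below compares direct and FFT-based 2D convolution on an array sized roughly like a feature map with a part-filter-sized kernel; the sizes are assumptions and the code is a generic illustration, not the paper's implementation.

```python
# Generic illustration of why frequency-domain convolution pays off for
# large feature maps and filter banks (not the paper's code).
import time
import numpy as np
from scipy.signal import convolve2d, fftconvolve

rng = np.random.default_rng(4)
feature_map = rng.normal(size=(256, 512))     # assumed feature-map size
part_filter = rng.normal(size=(15, 15))       # assumed part-filter size

t0 = time.perf_counter()
direct = convolve2d(feature_map, part_filter, mode="valid")
t1 = time.perf_counter()
viafft = fftconvolve(feature_map, part_filter, mode="valid")
t2 = time.perf_counter()

print(f"direct: {t1 - t0:.3f} s, FFT: {t2 - t1:.3f} s, "
      f"max abs diff: {np.abs(direct - viafft).max():.2e}")
```
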
Procedia PDF Downloads 281

1030 Improvement Performances of the Supersonic Nozzles at High Temperature Type Minimum Length Nozzle
Authors: W. Hamaidia, T. Zebbiche
Abstract:
This paper presents the design of axisymmetric supersonic nozzles that accelerate a supersonic flow to the desired Mach number while having a small weight and, at the same time, giving a high thrust. The nozzle considered gives a parallel and uniform flow at the exit section. The nozzle is divided into subsonic and supersonic regions. The supersonic portion is independent of the upstream conditions of the sonic line. The subsonic portion is used to give a sonic flow at the throat. In this case, the nozzle gives a uniform and parallel flow at the exit section and is known as the minimum length nozzle. The study is done at high temperature, lower than the dissociation threshold of the molecules, in order to improve the aerodynamic performance. Our aim is to improve the performance both by increasing the exit Mach number and the thrust coefficient and by reducing the nozzle's mass. The variation of the specific heats with temperature is considered. The design is carried out by the method of characteristics. The finite difference method with a predictor-corrector algorithm is used for the numerical resolution of the resulting nonlinear algebraic equations. The application is for air. All the obtained results depend on three parameters: the exit Mach number, the stagnation temperature, and the mesh chosen in the characteristics. A numerical simulation of the nozzle using Computational Fluid Dynamics (CFD-FASTRAN) was done to determine and confirm the necessary design parameters.
Keywords: supersonic flow, axisymmetric minimum length nozzle, high temperature, method of characteristics, calorically imperfect gas, finite difference method, thrust coefficient, mass of the nozzle, specific heat at constant pressure, air, error
Procedia PDF Downloads 150

1029 Ebola Virus Glycoprotein Inhibitors from Natural Compounds: Computer-Aided Drug Design
Authors: Driss Cherqaoui, Nouhaila Ait Lahcen, Ismail Hdoufane, Mehdi Oubahmane, Wissal Liman, Christelle Delaite, Mohammed M. Alanazi
Abstract:
The Ebola virus is a highly contagious and deadly pathogen that causes Ebola virus disease. The Ebola virus glycoprotein (EBOV-GP) is a key factor in viral entry into host cells, making it a critical target for therapeutic intervention. Using a combination of computational approaches, this study focuses on the identification of natural compounds that could serve as potent inhibitors of EBOV-GP. The 3D structure of EBOV-GP was selected, with missing residues modeled, and this structure was minimized and equilibrated. Two large natural compound databases, COCONUT and NPASS, were chosen and filtered based on toxicity risks and Lipinski's Rule of Five to ensure drug-likeness. Following this, a pharmacophore model, built from 22 reported active inhibitors, was employed to refine the selection of compounds with a focus on structural relevance to known Ebola inhibitors. The filtered compounds were subjected to virtual screening via molecular docking, which identified ten promising candidates (five from each database) with strong binding affinities to EBOV-GP. These compounds were then validated through molecular dynamics simulations to evaluate their binding stability and interactions with the target. The top three compounds from each database were further analyzed using ADMET profiling, confirming their favorable pharmacokinetic properties, stability, and safety. These results suggest that the selected compounds have the potential to inhibit EBOV-GP, offering new avenues for antiviral drug development against the Ebola virus.
Keywords: EBOV-GP, Ebola virus glycoprotein, high-throughput drug screening, molecular docking, molecular dynamics, natural compounds, pharmacophore modeling, virtual screening
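
The drug-likeness filtering step (Lipinski's Rule of Five) can be sketched with RDKit. The SMILES entries below are arbitrary example molecules, not compounds from COCONUT or NPASS, and the thresholds are the standard Rule-of-Five cut-offs.

```python
# Sketch of a Lipinski Rule-of-Five filter with RDKit; the SMILES below are
# arbitrary examples, not entries from COCONUT or NPASS.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_ro5(smiles: str) -> bool:
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Lipinski.NumHDonors(mol) <= 5
            and Lipinski.NumHAcceptors(mol) <= 10)

candidates = {
    "caffeine": "Cn1cnc2c1c(=O)n(C)c(=O)n2C",
    "aspirin": "CC(=O)Oc1ccccc1C(=O)O",
}
for name, smi in candidates.items():
    print(name, "passes Ro5:", passes_ro5(smi))
```
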
Procedia PDF Downloads 22

1028 A Simple and Empirical Refraction Correction Method for UAV-Based Shallow-Water Photogrammetry
Authors: I GD Yudha Partama, A. Kanno, Y. Akamatsu, R. Inui, M. Goto, M. Sekine
Abstract:
The aerial photogrammetry of shallow water bottoms has the potential to be an efficient high-resolution survey technique for shallow water topography, thanks to the advent of convenient UAVs and automatic image processing techniques (Structure-from-Motion (SfM) and Multi-View Stereo (MVS)). However, it suffers from the systematic overestimation of the bottom elevation due to the light refraction at the air-water interface. In this study, we present an empirical method to correct for the effect of refraction after the usual SfM-MVS processing, using common software. The presented method utilizes the empirical relation between the measured true depth and the estimated apparent depth to generate an empirical correction factor. This correction factor is then utilized to convert the apparent water depth into a refraction-corrected (real-scale) water depth. To examine its effectiveness, we applied the method to two river sites and compared the RMS errors in the corrected bottom elevations with those obtained by three existing methods. The result shows that the presented method is more effective than two of the existing methods: the method that applies no correction factor and the method that utilizes the refractive index of water (1.34) as the correction factor. In comparison with the remaining existing method, which uses an additive term (offset) after calculating the correction factor, the presented method performs well at Site 2 and worse at Site 1. However, we found this linear regression method to be unstable when the training data used for calibration are limited. It also suffers from a large negative bias in the correction factor when the estimated apparent water depth is affected by noise, according to our numerical experiment. Overall, the accuracy of the refraction correction method depends on various factors such as the locations, image acquisition, and GPS measurement conditions. The most effective method can be selected by using statistical selection (e.g., leave-one-out cross-validation).
Keywords: bottom elevation, MVS, river, SfM
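
A minimal sketch of the empirical correction idea is given below, using synthetic calibration points in which apparent SfM-MVS depths underestimate true depths roughly by the refraction factor. The no-intercept slope fitted on these points plays the role of the empirical correction factor, and leave-one-out cross-validation is used as the stability check mentioned in the abstract; all numbers are illustrative.

```python
# Sketch of the empirical refraction correction idea with synthetic data:
# apparent (SfM-MVS) depths systematically underestimate true depths, and a
# slope fitted on calibration points acts as the correction factor.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(5)
true_depth = rng.uniform(0.2, 1.5, 25)                      # m, calibration points
apparent = true_depth / 1.34 + rng.normal(0, 0.02, 25)      # refraction + noise

# Correction factor as a no-intercept slope between apparent and true depth.
X = apparent.reshape(-1, 1)
reg = LinearRegression(fit_intercept=False).fit(X, true_depth)
factor = reg.coef_[0]
print(f"empirical correction factor: {factor:.3f} (refractive index is 1.34)")

# Leave-one-out cross-validation to check the stability of the calibration.
rmse = -cross_val_score(LinearRegression(fit_intercept=False), X, true_depth,
                        cv=LeaveOneOut(),
                        scoring="neg_root_mean_squared_error").mean()
print(f"LOO RMSE of corrected depths: {rmse:.3f} m")

corrected = factor * apparent     # refraction-corrected (real-scale) depths
```
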
Procedia PDF Downloads 299

1027 ADP Approach to Evaluate the Blood Supply Network of Ontario
Authors: Usama Abdulwahab, Mohammed Wahab
Abstract:
This paper presents the application of uncapacitated facility location problems (UFLP) and 1-median problems to support decision making in blood supply chain networks. A plethora of factors make blood supply-chain networks a complex, yet vital problem for the regional blood bank. These factors are rapidly increasing demand; criticality of the product; strict storage and handling requirements; and the vastness of the theater of operations. As in the UFLP, facilities can be opened at any of m predefined locations with given fixed costs. Clients have to be allocated to the open facilities. In classical location models, the allocation cost is the distance between a client and an open facility. In this model, the costs are the allocation cost, transportation costs, and inventory costs. In order to address this problem, the median algorithm is used to analyze inventory, evaluate supply chain status, monitor performance metrics at different levels of granularity, and detect potential problems and opportunities for improvement. The Euclidean distance data for some Ontario cities (demand nodes) are used to test the developed algorithm. The SITATION software, a Lagrangian relaxation algorithm, and branch-and-bound heuristics are used to solve this model. Computational experiments confirm the efficiency of the proposed approach. Compared to the existing modeling and solution methods, the median algorithm approach not only provides a more general modeling framework but also leads to efficient solution times in general.
Keywords: approximate dynamic programming, facility location, perishable product, inventory model, blood platelet, P-median problem
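
To illustrate the 1-median building block, the sketch below picks, among candidate sites located at the demand nodes themselves, the one minimizing the total demand-weighted Euclidean distance. Coordinates and demands are made-up placeholders, not the Ontario data.

```python
# Illustrative 1-median computation on made-up demand points (not the
# Ontario data): choose the candidate site minimizing demand-weighted
# Euclidean distance.
import numpy as np

rng = np.random.default_rng(6)
coords = rng.uniform(0, 100, size=(12, 2))   # demand-node coordinates (km)
demand = rng.integers(50, 500, size=12)      # e.g., weekly platelet demand

# Candidate facility sites are restricted to the demand-node locations.
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
cost = (dist * demand[None, :]).sum(axis=1)  # total weighted distance per site

best = int(np.argmin(cost))
print(f"1-median at node {best}, objective = {cost[best]:.1f}")
```
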
Procedia PDF Downloads 506

1026 A Continuous Real-Time Analytic for Predicting Instability in Acute Care Rapid Response Team Activations
Authors: Ashwin Belle, Bryce Benson, Mark Salamango, Fadi Islim, Rodney Daniels, Kevin Ward
Abstract:
A reliable, real-time, and non-invasive system that can identify patients at risk for hemodynamic instability is needed to aid clinicians in their efforts to anticipate patient deterioration and initiate early interventions. The purpose of this pilot study was to explore the clinical capabilities of a real-time analytic from a single lead of an electrocardiograph to correctly distinguish between rapid response team (RRT) activations due to hemodynamic (H-RRT) and non-hemodynamic (NH-RRT) causes, as well as predict H-RRT cases with actionable lead times. The study consisted of a single-center, retrospective cohort of 21 patients with RRT activations from step-down and telemetry units. Through electronic health record review and blinded to the analytic's output, each patient was categorized by clinicians into H-RRT and NH-RRT cases. The analytic output and the categorization were compared. The prediction lead time prior to the RRT call was calculated. The analytic correctly distinguished between H-RRT and NH-RRT cases with 100% accuracy, demonstrating 100% positive and negative predictive values, and 100% sensitivity and specificity. In H-RRT cases, the analytic detected hemodynamic deterioration with a median lead time of 9.5 hours prior to the RRT call (range 14 minutes to 52 hours). The study demonstrates that an electrocardiogram (ECG)-based analytic has the potential to provide clinical decision and monitoring support for caregivers to identify at-risk patients within a clinically relevant timeframe, allowing for increased vigilance and early interventional support to reduce the chances of continued patient deterioration.
Keywords: critical care, early warning systems, emergency medicine, heart rate variability, hemodynamic instability, rapid response team
Procedia PDF Downloads 143

1025 Improvements in Transient Testing in The Transient REActor Test (TREAT) with a Choice of Filter
Authors: Harish Aryal
Abstract:
The safe and reliable operation of nuclear reactors has always been one of the topmost priorities in the nuclear industry. Transient testing allows us to understand the time-dependent behavior of the neutron population in response to either a planned change in the reactor conditions or unplanned circumstances. These unforeseen conditions might occur due to sudden reactivity insertions, feedback, power excursions, instabilities, and accidents. To study such behavior, we need transient testing, which is analogous to car crash testing used to estimate the durability and strength of a car design. In nuclear designs, such transient testing can simulate a wide range of accidents due to sudden reactivity insertions and helps to study the feasibility and integrity of the fuel to be used in certain reactor types. This testing involves a high neutron flux environment and real-time imaging technology with advanced instrumentation of appropriate accuracy and resolution to study the fuel slumping behavior. With the aid of transient testing and adequate imaging tools, it is possible to test the safety basis for reactor and fuel designs, which serves as a gateway to licensing advanced reactors in the future. To that end, it is crucial to fully understand advanced imaging techniques both analytically and via simulations. This paper presents an innovative method of supporting real-time imaging of fuel pins and other structures during transient testing. The major fuel-motion detection device studied in this work is the hodoscope, which requires collimators. This paper provides 1) an MCNP model and simulation of a Transient Reactor Test (TREAT) core with a central fuel element replaced by a slotted fuel element that provides an open path between test samples and a hodoscope detector, and 2) a choice of a suitable filter to improve image resolution.
Keywords: hodoscope, transient testing, collimators, MCNP, TREAT, hodogram, filters
Procedia PDF Downloads 77

1024 Measuring the Unmeasurable: A Project of High Risk Families Prediction and Management
Authors: Peifang Hsieh
Abstract:
The prevention of child abuse has aroused serious concern in Taiwan because of the disparity between the increasing number of reported child abuse cases, which doubled over the past decade, and the scarcity of social workers. New Taipei City, which has the largest population in Taiwan and where over 70% of its 4 million citizens belong to migrant families in which the needs of children can easily be neglected due to insufficient support from relatives and communities, sees urgency for a social support system that preemptively identifies and reaches out to families at high risk of child abuse, so as to offer timely assistance and preventive measures to safeguard the welfare of the children. Big data analysis is the inspiration. As it was clear that families at high risk of child abuse have certain characteristics in common, New Taipei City decided to consolidate detailed background data from the departments of social affairs, education, labor, and health (for example, the parents' employment and health status, and whether they are imprisoned, fugitives, or under substance abuse), to cross-reference for accurate and prompt identification of the high-risk families in need. The Service Center for High-Risk Families (SCHF) was established to integrate data cross-departmentally. By utilizing the machine learning 'random forest' method to build a risk prediction model which can detect early the families most likely to experience child abuse, the SCHF marks high-risk families red, yellow, or green to indicate the urgency of intervention, so that the families concerned can be provided with timely services. The accuracy and recall rates of the above model were 80% and 65%. This prediction model can not only improve the child abuse prevention process by helping social workers differentiate the risk levels of newly reported cases, which may significantly reduce their major workload, but can also be referenced for future policy-making.
Keywords: child abuse, high-risk families, big data analysis, risk prediction model
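
A minimal sketch of this kind of risk model is shown below, using scikit-learn on synthetic, imbalanced data in place of the cross-departmental records; all feature semantics, class weights, and tier cut-offs are assumptions for illustration only.

```python
# Sketch of a random-forest risk model on synthetic household records
# (features and data are invented, not the New Taipei City data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for cross-departmental background features
# (employment, health, imprisonment, substance-abuse flags, ...).
X, y = make_classification(n_samples=5000, n_features=12, n_informative=6,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)

clf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                             random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(f"accuracy = {accuracy_score(y_te, pred):.2f}, "
      f"recall = {recall_score(y_te, pred):.2f}")

# Risk tiers (red / yellow / green) from predicted probabilities; the
# cut-offs here are arbitrary placeholders.
proba = clf.predict_proba(X_te)[:, 1]
tier = np.where(proba > 0.7, "red", np.where(proba > 0.3, "yellow", "green"))
print(dict(zip(*np.unique(tier, return_counts=True))))
```
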
Procedia PDF Downloads 135

1023 Upcoming Fight Simulation with Smart Shadow
Authors: Ramiz Kuliev, Fuad Kuliev-Smirnov
Abstract:
The 'Shadow Sparring' training exercise is widely used in the training of boxers and martial artists. The main disadvantage of the usual shadow sparring is that the trainer cannot fully control such training and evaluate its results. During a competition, the athlete preparing for the upcoming fight imagines the Shadow (the upcoming opponent) according to his own imagination. A 'Smart-Shadow Sparring' (SSS) is an innovative version of the 'Shadow Sparring'. During SSS, the fighter will see the Shadow (a virtual opponent that moves, defends, and punches) and understand when he misses punches from the Shadow. The task of a real athlete is to spar with a virtual one, move around, punch in the direction of unprotected areas of the Shadow, and dodge his punches. The Shadow's moves and punches are set up before each training session. The system will give the coach full information about the virtual sparring: (i) how many and what type of punches the fighter has landed, (ii) the accuracy of these punches, (iii) how many and what type of virtual punches (punches of the Smart-Shadow) the fighter has missed, etc. SSS will be recorded as an animated fight between two fighters and will help the coach to analyze past training sessions. SSS can be configured to fit the physical and technical characteristics of the next real opponent (size, techniques, speed, missed and landed punches, etc.). This will allow the upcoming fight to be simulated and rehearsed, and improve readiness for the next opponent. For amateur fighters, SSS will be reconfigured several times during a tournament, when the real opponent becomes known. SSS can be used in three versions: (1) Digital Shadow: the athlete will see the Shadow on a monitor; (2) VR-Shadow: the athlete will see the Shadow in VR glasses; (3) Smart Shadow: the Shadow will be controlled by artificial intelligence. These technologies are based on the 'semi-real simulation' method. The technology allows coaches to train athletes remotely. Simulation of different opponents will help athletes better prepare for competition. Repeated rehearsals of the upcoming fight will help improve results. SSS can improve results in boxing, taekwondo, karate, and fencing; 41 sets of medals will be awarded in these sports at the 2020 Olympic Games.
Keywords: boxing, combat sports, fight simulation, shadow sparring
Procedia PDF Downloads 132

1022 The Effect of Visual Access to Greenspace and Urban Space on a False Memory Learning Task
Authors: Bryony Pound
Abstract:
This study investigated how views of green or urban space affect learning performance. It provides evidence of the value of visual access to greenspace in work and learning environments, and builds on the extensive research into the cognitive and learning-related benefits of access to green and natural spaces, particularly in learning environments. It demonstrates that visual access to natural spaces while learning can produce statistically significantly faster responses than urban views after only 5 minutes. The primary hypothesis of this research was that a greenspace view would improve short-term learning. Participants were randomly assigned to either a view of parkland or of urban buildings from the same room. They completed a psychological test of two stages. The first stage consisted of a presentation of words from eight different categories (four manmade and four natural). Following this, a 2.5-minute break was given; participants were not prompted to look out of the window, but all were observed doing so. The second stage of the test involved a word recognition/false memory test of three types. Type 1 was presented words from each category; Type 2 was non-presented words from those same categories; and Type 3 was non-presented words from different categories. Participants were asked to respond with whether they thought they had seen the words before or not. Accuracy of responses and reaction times were recorded. The key finding was that reaction times for Type 2 words (highest difficulty) were significantly different between the urban and green view conditions. Those with an urban view had slower reaction times for these words, so a view of greenspace resulted in better information retrieval for word and false memory recognition. Importantly, this difference was found after only 5 minutes of exposure to either view, during winter, and with a sample size of only 26. Greenspace views improve performance in a learning task. This provides a case for better visual access to greenspace in work and learning environments.
Keywords: benefits, greenspace, learning, restoration
Procedia PDF Downloads 127

1021 TAXAPRO, A Streamlined Pipeline to Analyze Shotgun Metagenomes
Authors: Sofia Sehli, Zainab El Ouafi, Casey Eddington, Soumaya Jbara, Kasambula Arthur Shem, Islam El Jaddaoui, Ayorinde Afolayan, Olaitan I. Awe, Allissa Dillman, Hassan Ghazal
Abstract:
The ability to promptly sequence whole genomes at a relatively low cost has revolutionized the way we study the microbiome. Microbiologists are no longer limited to studying what can be grown in a laboratory and instead are given the opportunity to rapidly identify the makeup of microbial communities in a wide variety of environments. Analyzing whole genome sequencing (WGS) data is a complex process that involves multiple moving parts and might be rather unintuitive for scientists who don't typically work with this type of data. Thus, to help lower the barrier for less computationally inclined individuals, TAXAPRO was developed at the first Omics Codeathon held virtually by the African Society for Bioinformatics and Computational Biology (ASBCB) in June 2021. TAXAPRO is an advanced metagenomics pipeline that accurately assembles organelle genomes from whole-genome sequencing data. TAXAPRO seamlessly combines WGS analysis tools to create a pipeline that automatically processes raw WGS data and presents organism abundance information in both a tabular and a graphical format. TAXAPRO was evaluated using COVID-19 patient gut microbiome data. Analysis performed by TAXAPRO demonstrated a high abundance of Clostridia and Bacteroidia and a low abundance of Proteobacteria relative to others in the gut microbiome of patients hospitalized with COVID-19, consistent with the original findings derived using a different analysis methodology. This provides crucial evidence that the TAXAPRO workflow delivers reliable organism abundance information overnight without the hassle of performing the analysis manually.
Keywords: metagenomics, shotgun metagenomic sequence analysis, COVID-19, pipeline, bioinformatics
Procedia PDF Downloads 221

1020 Performance Based Seismic Retrofit of Masonry Infilled Reinforced Concrete Frames Using Passive Energy Dissipation Devices
Authors: Alok Madan, Arshad K. Hashmi
Abstract:
The paper presents a plastic analysis procedure based on the energy balance concept for the performance-based seismic retrofit of multi-story, multi-bay masonry infilled reinforced concrete (R/C) frames with a 'soft' ground story using passive energy dissipation (PED) devices, with the objective of achieving a target performance level of the retrofitted R/C frame for a given seismic hazard level at the building site. The proposed energy-based plastic analysis procedure was employed for developing performance-based design (PBD) formulations for PED devices for a simulated application in the seismic retrofit of existing frame structures designed in compliance with the prevalent standard codes of practice. The PBD formulations developed for PED devices were implemented for the simulated seismic retrofit of a representative code-compliant masonry infilled R/C frame with a 'soft' ground story using friction dampers as the PED device. Non-linear dynamic analyses of the retrofitted masonry infilled R/C frames are performed to investigate the efficacy and accuracy of the proposed energy-based plastic analysis procedure in achieving the target performance level under design-level earthquakes. Results of the non-linear dynamic analyses demonstrate that the maximum inter-story drifts in the masonry infilled R/C frames with a 'soft' ground story retrofitted with the friction dampers designed using the proposed PBD formulations are controlled within the target drifts under near-field as well as far-field earthquakes.
Keywords: energy methods, masonry infilled frame, near-field earthquakes, seismic protection, supplemental damping devices
Procedia PDF Downloads 298