Search results for: computational grid
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2936

716 Evaluation of Gesture-Based Password: User Behavioral Features Using Machine Learning Algorithms

Authors: Lakshmidevi Sreeramareddy, Komalpreet Kaur, Nane Pothier

Abstract:

Graphical passwords have existed for decades. Their major advantage is that they are easier to remember than alphanumeric passwords. Their disadvantage (especially for recognition-based passwords) is a smaller password space, which makes them more vulnerable to brute-force attacks; graphical passwords are also highly susceptible to shoulder surfing. The gesture-based password method we developed is a grid-free, template-free method. In this study, we evaluated gesture-based passwords for usability and vulnerability, and the results are significant. We developed a gesture-based password application for data collection, with two modes: creation mode and replication mode. In creation mode (Session 1), users were asked to create six different passwords and re-enter each password five times. In replication mode, users saw a password image created by another user for a fixed duration of time. Three durations, 5 seconds (Session 2), 10 seconds (Session 3), and 15 seconds (Session 4), were used to mimic a shoulder-surfing attack. After the timer expired, the password image was removed, and users were asked to replicate the password. In total, 74, 57, 50, and 44 users participated in Sessions 1 through 4, respectively. Machine learning algorithms were then applied to determine whether the person entering a password is the genuine user or an imposter. Five algorithms were compared for user authentication: decision trees, linear discriminant analysis, the naive Bayes classifier, support vector machines (SVMs) with a Gaussian radial basis kernel function, and k-nearest neighbor. Because gesture-based password features vary from one entry to the next, distinguishing the creator from an intruder is difficult.
For each password entered by the user, four features were extracted: password score, password length, password speed, and password size. All four features were normalized before being fed to a classifier. Three different classifiers were trained using data from all four sessions. Classifiers A, B, and C were trained and tested using data from the password creation session and the password replication with a timer of 5 seconds, 10 seconds, and 15 seconds, respectively. The classification accuracies for Classifier A using five ML algorithms are 72.5%, 71.3%, 71.9%, 74.4%, and 72.9%, respectively. The classification accuracies for Classifier B using five ML algorithms are 69.7%, 67.9%, 70.2%, 73.8%, and 71.2%, respectively. The classification accuracies for Classifier C using five ML algorithms are 68.1%, 64.9%, 68.4%, 71.5%, and 69.8%, respectively. SVMs with Gaussian Radial Basis Kernel outperform other ML algorithms for gesture-based password authentication. Results confirm that the shorter the duration of the shoulder-surfing attack, the higher the authentication accuracy. In conclusion, behavioral features extracted from the gesture-based passwords lead to less vulnerable user authentication.
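As an illustrative sketch (not the study's implementation or data), the pipeline of min-max feature normalization followed by one of the five evaluated classifiers, k-nearest neighbor, fits in a few lines; the feature vectors stand in for password score, length, speed, and size:

```python
import math

def normalize(rows):
    # Min-max normalize each feature column to [0, 1].
    cols = list(zip(*rows))
    mins = [min(c) for c in cols]
    maxs = [max(c) for c in cols]
    return [[(v - lo) / (hi - lo) if hi > lo else 0.0
             for v, lo, hi in zip(r, mins, maxs)] for r in rows]

def knn_predict(train_X, train_y, x, k=3):
    # Majority vote among the k nearest training samples.
    dists = sorted((math.dist(x, t), y) for t, y in zip(train_X, train_y))
    votes = [y for _, y in dists[:k]]
    return max(set(votes), key=votes.count)
```

A genuine re-entry lands close to the creator's feature vectors after normalization, so the vote labels it "genuine"; an imposter's replication drifts in speed and size and is labeled accordingly.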

Keywords: authentication, gesture-based passwords, machine learning algorithms, shoulder-surfing attacks, usability

Procedia PDF Downloads 107
715 Continuous Measurement of Spatial Exposure Based on Visual Perception in Three-Dimensional Space

Authors: Nanjiang Chen

Abstract:

In the backdrop of expanding urban landscapes, accurately assessing spatial openness is critical. Traditional visibility analysis methods grapple with discretization errors and inefficiencies, creating a gap in truly capturing the human experience of space. Addressing these gaps, this paper introduces a distinct continuous visibility algorithm, a leap in measuring urban spaces from a human-centric perspective. This study presents a methodological breakthrough by applying this algorithm to urban visibility analysis. Unlike conventional approaches, this technique allows for a continuous range of visibility assessment, closely mirroring human visual perception. By eliminating the need for predefined subdivisions in ray casting, it offers a more accurate and efficient tool for urban planners and architects. The proposed algorithm not only reduces computational errors but also demonstrates faster processing capabilities, validated through a case study in Beijing's urban setting. Its key distinction lies in its potential to benefit a broad spectrum of stakeholders, ranging from urban developers to public policymakers, aiding in the creation of urban spaces that prioritize visual openness and quality of life. This advancement in urban analysis methods could lead to more inclusive, comfortable, and well-integrated urban environments, enhancing the spatial experience for communities worldwide.
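To make the trade-off concrete, here is a minimal sketch of the conventional discretized alternative that the continuous algorithm improves on: visibility from a viewpoint is approximated by casting a finite number of rays and counting the unobstructed ones. The circular obstacles and all names are illustrative; shrinking the angular step only converges toward, and never reaches, the continuous measure.

```python
import math

def ray_hits_circle(p, d, center, radius, max_dist):
    # Intersect a ray (origin p, unit direction d) with a circle,
    # accepting only hits within max_dist.
    fx, fy = p[0] - center[0], p[1] - center[1]
    b = 2 * (fx * d[0] + fy * d[1])
    c = fx * fx + fy * fy - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return False
    t = (-b - math.sqrt(disc)) / 2
    return 0 <= t <= max_dist

def visibility_ratio(viewpoint, obstacles, max_dist, n_rays=360):
    # Discretized visual-openness estimate: fraction of rays that reach
    # max_dist without hitting any obstacle.
    clear = 0
    for i in range(n_rays):
        a = 2 * math.pi * i / n_rays
        d = (math.cos(a), math.sin(a))
        if not any(ray_hits_circle(viewpoint, d, c, r, max_dist)
                   for c, r in obstacles):
            clear += 1
    return clear / n_rays
```

The discretization error the abstract refers to is exactly the dependence of this ratio on `n_rays`.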

Keywords: visual openness, spatial continuity, ray-tracing algorithms, urban computation

Procedia PDF Downloads 46
714 Numerical Simulation of the Effect of Single and Dual Synthetic Jet on Stall Phenomenon on NACA (National Advisory Committee for Aeronautics) GA(W)-2 Airfoil

Authors: Abbasali Abouei Mehrizi, Hamid Hassanzadeh Afrouzi

Abstract:

Reducing drag increases an aircraft's efficiency and improves its performance. Flow control methods delay flow separation and consequently reduce reversed flow in the separation region; they enhance lift while decreasing drag, thereby improving aircraft efficiency. Flow control methods can be divided into active and passive types. The synthetic jet actuator (SJA) applied in this study to the NACA GA(W)-2 airfoil is an active flow control method for preventing stall on the airfoil. In this research, the airfoil was simulated in OpenFOAM at different angles of attack, with and without jets. After identifying the best SJA position on the airfoil suction surface, the simultaneous effect of two SJAs was also investigated. The SJA was found to be most effective at 12% chord (C), close to the airfoil's leading edge (LE). At 12% chord, the SJA decreases drag significantly while increasing lift, and the average lift increase there, 10.4%, was the highest among the positions tested. The largest drag reduction, about 5%, occurred at SJA = 0.25C. Given the positive effects of the SJA in the 12% and 25% chord regions, these regions were selected for applying dual jets at two post-stall angles of attack, 16° and 22°.

Keywords: active and passive flow control methods, computational fluid dynamics, flow separation, synthetic jet

Procedia PDF Downloads 83
713 Automatic Fluid-Structure Interaction Modeling and Analysis of Butterfly Valve Using Python Script

Authors: N. Guru Prasath, Sangjin Ma, Chang-Wan Kim

Abstract:

A butterfly valve is a quarter-turn valve used to control the flow of a fluid through a section of pipe. Butterfly valves are used in a wide range of applications, such as water distribution, sewage, and oil and gas plants; large-diameter butterfly valves in particular find immense application in hydro power plants to control fluid flow. Given the cost and size constraints of laboratory setups, large-diameter valves are mostly studied by computational methods, the best and least expensive solution. CFD and FEM software are used to perform large-scale fluid and structural valve analyses, respectively. To perform such analyses on a butterfly valve, the CAD model must be recreated and meshed in conventional software for each set of valve dimensions, which is a time-consuming limitation. To overcome this issue, a Python script was created to carry out the complete pre-processing setup automatically in the Salome software. Specifying the model dimensions directly in the Python code makes the turnaround comparatively faster and the valve analysis easier to perform. Hence, in this paper, an attempt is made to study the fluid-structure interaction (FSI) of butterfly valves by varying the valve angles and dimensions using Python code in the pre-processing software, and results are presented.
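The idea behind such a script can be sketched as follows, with hypothetical function and parameter names (the actual script drives Salome's Python API, which is not reproduced here): valve dimensions are declared once, and one pre-processing case definition is generated per configuration instead of rebuilding the CAD model by hand.

```python
import itertools

def build_case(diameter_m, disc_angle_deg, mesh_size_m):
    # Stand-in for calls into the pre-processor's Python API: each case
    # bundles the parametric geometry and meshing targets for one run.
    return {
        "geometry": {"pipe_diameter": diameter_m, "disc_angle": disc_angle_deg},
        "mesh": {"target_cell_size": mesh_size_m},
    }

# Sweep three diameters and three opening angles; tie the mesh size to
# the diameter so resolution scales with the valve.
cases = [build_case(d, a, d / 50)
         for d, a in itertools.product([0.5, 1.0, 1.8], [20, 45, 70])]
```

Each case dictionary would then be handed to the geometry/meshing calls in turn, which is where the claimed time saving over manual CAD rework comes from.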

Keywords: butterfly valve, flow coefficient, automatic CFD analysis, FSI analysis

Procedia PDF Downloads 241
712 Structural Performance Evaluation of Electronic Road Sign Panels Reflecting Damage Scenarios

Authors: Junwon Seo, Bipin Adhikari, Euiseok Jeong

Abstract:

This paper evaluates the structural performance of welded electronic road signs under various damage scenarios (DSs) using a finite element (FE) model calibrated with full-scale ultimate load testing results. The tested electronic road sign specimen was built with a back skin made of 5052 aluminum and two channels and a frame made of 6061 aluminum, with the back skin welded to the frame. The tested specimen was 1.52 m long, 1.43 m wide, and 0.28 m deep. An actuator applied vertical loads at the center of the back skin of the specimen, producing a displacement of 158.7 mm at an ultimate load of 153.46 kN. Using these testing data, an FE model of the tested specimen was generated and calibrated in ABAQUS; the difference in ultimate load between the calibrated model simulation and the full-scale test was only 3.32%. Six different DSs were then simulated by diminishing the areas of the welded connection in the calibrated model. It was found that the corners of the back skin-frame joint were prone to connection failure in all the DSs, and that failure of the back skin-frame connection initiated markedly at the distant edges.

Keywords: computational analysis, damage scenarios, electronic road signs, finite element, welded connections

Procedia PDF Downloads 92
711 Iris Feature Extraction and Recognition Based on Two-Dimensional Gabor Wavelet Transform

Authors: Bamidele Samson Alobalorun, Ifedotun Roseline Idowu

Abstract:

Biometric technologies use parts of the human body for unique and reliable identification based on physiological traits. The iris recognition system is a biometric identification method; the human iris has discriminating characteristics that make the method efficient. To achieve this efficiency, distinct features must be extracted from the human iris in order to generate accurate authentication of persons. In this study, an iris recognition approach using a 2D Gabor filter for feature extraction is applied to iris templates. The 2D Gabor filter produced the patterns used for training, which were then passed to a Hamming distance matching technique for recognition. Results are compared for two iris image subjects across matching filter indices 1-5, based on the CASIA iris image database. Iris localization and segmentation were performed using Daugman's integro-differential operator, and normalization followed Daugman's rubber sheet model. Comparing the two subjects, the computational cost of the developed models, measured as training time and average testing time of the Hamming distance classifier, was determined, and a best recognition accuracy of 96.11% was achieved.
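A simplified sketch of the core operations, assuming a small normalized iris patch rather than a full template, shows how a 2D Gabor response can be phase-quantized into a bit code and compared via Hamming distance; the filter parameters here are illustrative, not the study's tuned values.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    # 2D Gabor filter: Gaussian envelope times a complex sinusoid.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.exp(2j * np.pi * xr / wavelength)

def iris_code(patch, wavelengths, theta=0.0, sigma=2.0):
    # Daugman-style phase quantization: one bit each for the signs of the
    # real and imaginary parts of the filter response.
    bits = []
    for wl in wavelengths:
        resp = np.sum(patch * gabor_kernel(patch.shape[0], wl, theta, sigma))
        bits += [resp.real > 0, resp.imag > 0]
    return np.array(bits)

def hamming_distance(code_a, code_b):
    # Fraction of disagreeing bits; 0 for identical codes.
    return np.mean(code_a != code_b)
```

A real template tiles many patches, orientations, and wavelengths into a long code, but the match decision is still a thresholded Hamming distance as above.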

Keywords: Daugman rubber sheet, feature extraction, Hamming distance, iris recognition system, 2D Gabor wavelet transform

Procedia PDF Downloads 65
710 Digital Twins: Towards an Overarching Framework for the Built Environment

Authors: Astrid Bagireanu, Julio Bros-Williamson, Mila Duncheva, John Currie

Abstract:

Digital Twins (DTs) have entered the built environment from more established industries like aviation and manufacturing, although there has never been a common goal for utilising DTs at scale. Defined as the cyber-physical integration of data between an asset and its virtual counterpart, the DT has been treated in the literature mainly from an operational standpoint, i.e., monitoring the performance of a built asset. However, this has never been translated into how DTs should be implemented in a project and what responsibilities each project stakeholder holds in the realisation of a DT. What is needed is an approach that translates these requirements into actionable DT dimensions. This paper presents a foundation for an overarching framework specific to the built environment. For the purposes of this research, the widely used UK Royal Institute of British Architects (RIBA) Plan of Work 2020 is used as a basis for itemising project stages. The RIBA Plan of Work consists of eight stages designed to inform the definition, briefing, design, coordination, construction, handover, and use of a built asset. Similar project stages are utilised in other countries; therefore, the recommendations from the interviews presented in this paper are applicable internationally. At the same time, there is no single mainstream software resource that leverages DT abilities. This ambiguity meets an unparalleled ambition from governments and industries worldwide to achieve a national grid of interconnected DTs. For the construction industry to access these benefits, a defined starting point is necessary. This research aims to provide a comprehensive understanding of the potential applications and ramifications of DT in the context of the built environment. This paper is an integral part of a larger research project aimed at developing a conceptual framework for the Architecture, Engineering, and Construction (AEC) sector following a conventional project timeline.
Therefore, this paper plays a pivotal role in providing practical insights and a tangible foundation for developing a stage-by-stage approach to assimilating the potential of DTs within the built environment. First, the research reviews the relevant literature, while acknowledging the inherent constraint of the limited sources available. Secondly, a qualitative study compiling the views of 14 DT experts is presented, concluding with an inductive analysis of the interview findings that highlights the barriers and strengths of DTs in the context of framework development. As parallel developments aim to progress net-zero-centred design and improve project efficiencies across the built environment, the limited resources available to support DTs should be leveraged to propel the industry into its digitalisation era, in which AEC stakeholders have a fundamental role to play from the earliest stages of a project.

Keywords: digital twins, decision-making, design, net-zero, built environment

Procedia PDF Downloads 122
709 Molecular Interactions Driving RNA Binding to hnRNPA1 Implicated in Neurodegeneration

Authors: Sakina Fatima, Joseph-Patrick W. E. Clarke, Patricia A. Thibault, Subha Kalyaanamoorthy, Michael Levin, Aravindhan Ganesan

Abstract:

Heterogeneous nuclear ribonucleoprotein A1 (hnRNPA1, or A1) is associated with the pathology of different diseases, including neurological disorders and cancers. In particular, the aggregation and dysfunction of A1 have been identified as a critical driver of neurodegeneration (NDG) in Multiple Sclerosis (MS). Structurally, A1 includes a low-complexity domain (LCD) and two RNA-recognition motifs (RRMs), and their interdomain coordination may play a crucial role in A1 aggregation. Previous studies propose that RNA inhibitors or nucleoside analogs that bind to the RRMs can potentially prevent A1 self-association. Therefore, a molecular-level understanding of the structures, dynamics, and nucleotide interactions of the A1 RRMs can be useful for developing therapeutics for NDG in MS. In this work, a combination of computational modelling and biochemical experiments was employed to analyze a set of RNA-A1 RRM complexes. Initially, atomistic models of the RNA-RRM complexes were constructed by modifying known crystal structures (e.g., PDBs: 4YOE and 5MPG) and through molecular docking calculations. The complexes were optimized using molecular dynamics simulations (200-400 ns), and their binding free energies were computed. The binding affinities of the selected complexes were validated using a thermal shift assay. Further, the molecular interactions that contributed most to the overall stability of the RNA-A1 RRM complexes were deduced. The results highlight that adenine and guanine are the most suitable nucleotides for high-affinity binding with A1. These insights will be useful in the rational design of nucleotide analogs for targeting the A1 RRMs.

Keywords: hnRNPA1, molecular docking, molecular dynamics, RNA-binding proteins

Procedia PDF Downloads 119
708 A Finite Element Based Predictive Stone Lofting Simulation Methodology for Automotive Vehicles

Authors: Gaurav Bisht, Rahul Rathnakumar, Ravikumar Duggirala

Abstract:

Predictive simulations are a key focus area in safety-critical industries such as aerospace and high-performance automotive engineering. The stone-chipping study is one such effort taken up by the industry to predict and evaluate the damage caused by gravel impact on vehicles. This paper describes a finite element based method that can simulate the ejection of gravel chips from a vehicle tire. The FE simulations were used to obtain the initial ejection velocity of the stones for various driving conditions using a computational contact mechanics approach. To verify the accuracy of the tire model, several parametric studies were conducted. The FE simulations resulted in stone loft velocities ranging from 0 to 8 m/s, regardless of tire speed. The stress on the tire at the instant of initial contact with the stone increased linearly with vehicle speed. Mesh convergence studies indicated that a highly resolved tire mesh tends to result in better momentum transfer between the tire and the stone. A fine tire mesh also showed a linearly increasing relationship between tire forward speed and stone lofting speed, which was not observed with coarser meshes. However, the studies also highlighted a potential challenge: the ejection velocity vector of the stone is sensitive to the mesh, owing to the FE-based contact mechanical formulation of the problem.
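The FE contact formulation cannot be reproduced in a few lines, but a back-of-envelope rigid-body impulse model (an assumption of this sketch, not the paper's method) illustrates why loft speed scales linearly with tire speed when the contact transfers momentum efficiently:

```python
def loft_speed(tire_speed, stone_mass, tread_mass, restitution=0.3):
    # Impulsive normal contact between a tread element and the stone:
    # J = (1 + e) * mu * v_rel, where mu is the reduced mass of the pair.
    # All masses and the restitution coefficient are illustrative values.
    mu = stone_mass * tread_mass / (stone_mass + tread_mass)
    impulse = (1 + restitution) * mu * tire_speed
    return impulse / stone_mass
```

In this idealization the loft speed is proportional to the relative contact speed, mirroring the linear trend the fine-mesh FE runs exhibit; the coarse-mesh runs break that proportionality because the discretized contact transfers momentum inconsistently.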

Keywords: abaqus, contact mechanics, foreign object debris, stone chipping

Procedia PDF Downloads 263
707 Assessment of the Effect of Building Materials on Energy Demand of Buildings in Jos: An Experimental and Numerical Approach

Authors: Zwalnan Selfa Johnson, Caleb Nanchen Nimyel, Gideon Duvuna Ayuba

Abstract:

Air conditioning accounts for a significant share of the overall energy consumed in residential buildings, and solar thermal gains account for a significant component of the air conditioning load. This study compares the solar thermal gain and air conditioning load of a proposed building design with those of a typical conventional building under the climatic conditions of Jos, Nigeria, using a combined experimental and computational method based on TRNSYS software. According to the findings, the proposed building design's annual average solar thermal gains are lower than those of the reference building. The study case building's decreased solar heat gain is mostly attributable to the lower temperature of the building zones, which results from the greater building volume and lower fenestration ratio (the ratio of external opening area to external wall area). This result shows that the proposed building design adapts to the local climate better than standard conventional construction in Jos in maintaining a suitable temperature within the building, meaning that the air-conditioning electrical energy consumption per unit volume of the proposed design will be lower than that of a conventional building design.

Keywords: solar heat gain, building zone, cooling energy, air conditioning, zone temperature

Procedia PDF Downloads 93
706 Movie Genre Preference Prediction Using Machine Learning for Customer-Based Information

Authors: Haifeng Wang, Haili Zhang

Abstract:

Most movie recommendation systems have been developed to help customers find items of interest. This work introduces a predictive model usable by small and medium-sized enterprises (SMEs) that need a data-based, analytical approach to stock suitable movies for local audiences and retain more customers. We used classification models to extract features from thousands of customers' demographic, behavioral, and social information to predict their movie genre preference. In the implementation, a Gaussian kernel support vector machine (SVM) classification model and a logistic regression model were established to extract features from the sample data, and their in-sample test errors were compared. Out-of-sample errors were also compared under different Vapnik-Chervonenkis (VC) dimensions to detect and prevent overfitting. The Gaussian kernel SVM prediction model correctly predicts movie genre preferences in 85% of positive cases. The accuracy of the algorithm increased to 93% with a smaller VC dimension and less overfitting. These findings advance our understanding of how to use machine learning to predict customers' preferences from a small data set and how to design prediction tools for these enterprises.
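As a sketch of the second of the two models, a plain logistic regression classifier trained by gradient descent can be written without any ML library; the two-feature encoding is hypothetical, not the customer data set used in the study.

```python
import math

def train_logistic(X, y, lr=0.5, epochs=1000):
    # Stochastic gradient descent on the logistic (cross-entropy) loss.
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))   # predicted probability of class 1
            g = p - yi                   # gradient of the loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    # Class 1 if the linear score is positive.
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if z > 0 else 0
```

The SVM counterpart replaces the linear score with a kernel expansion over support vectors; comparing the two on held-out data is what drives the VC-dimension discussion in the abstract.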

Keywords: computational social science, movie preference, machine learning, SVM

Procedia PDF Downloads 260
705 Impacts of the Modification of a Two-Blade Impeller on the Agitation of Newtonian Fluids

Authors: Abderrahim Sidi Mohammed Nekrouf, Sarra Youcefi

Abstract:

Fluid mixing plays a crucial role in numerous industries as it has a significant impact on the final product quality and performance. In certain cases, the circulation of viscous fluids presents challenges, leading to the formation of stagnant zones. To overcome this issue, stirring devices are employed for fluid mixing. This study focuses on a numerical analysis aimed at understanding the behavior of Newtonian fluids when agitated by a two-blade agitator in a cylindrical vessel. We investigate the influence of the agitator shape on fluid motion. Bi-blade agitators of this type are commonly used in the food, cosmetic, and chemical industries to agitate both viscous and non-viscous liquids. Numerical simulations were conducted using Computational Fluid Dynamics (CFD) software to obtain velocity profiles, streamlines, velocity contours, and the associated power number. The obtained results were compared with experimental data available in the literature, validating the accuracy of our numerical approach. The results clearly demonstrate that modifying the agitator shape has a significant impact on fluid motion. This modification generates an axial flow that enhances the efficiency of the fluid flow. The various velocity results convincingly reveal that the fluid is more uniformly agitated with this modification, resulting in improved circulation and a substantial reduction in stagnant zones.
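The power number reported alongside such CFD results is a simple dimensionless group; a small helper (with illustrative values, not the study's data) computes it together with the impeller Reynolds number used to characterize the flow regime:

```python
def power_number(power_w, density, rpm, diameter):
    # Np = P / (rho * N^3 * D^5), with N in revolutions per second.
    n = rpm / 60.0
    return power_w / (density * n**3 * diameter**5)

def reynolds_number(density, rpm, diameter, viscosity):
    # Impeller Reynolds number: Re = rho * N * D^2 / mu.
    n = rpm / 60.0
    return density * n * diameter**2 / viscosity
```

In a simulation, `power_w` is typically recovered from the computed torque on the blades (P = 2*pi*N*T), and the Np-Re curve is what gets compared against the experimental literature for validation.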

Keywords: Newtonian fluids, numerical modeling, two-blade agitator, CFD

Procedia PDF Downloads 78
704 A Posteriori Trading-Inspired Model-Free Time Series Segmentation

Authors: Plessen Mogens Graf

Abstract:

Within the context of multivariate time series segmentation, this paper proposes a method inspired by a posteriori optimal trading. After a normalization step, time series are treated channelwise as surrogate stock prices that can be traded optimally a posteriori in a virtual portfolio holding either stock or cash. Linear transaction costs are interpreted as hyperparameters for noise filtering. Trading signals, as well as trading signals obtained on the reversed time series, are used for unsupervised channelwise labeling before a consensus over all channels determines the final segmentation time instants. The method is model-free in that no model prescriptions for segments are made. Benefits of the proposed approach include simplicity, computational efficiency, and adaptability to a wide range of time series shapes. Performance is demonstrated on synthetic and real-world data, including a large-scale dataset comprising a multivariate time series of dimension 1000 and length 2709. The proposed method is compared to a popular model-based bottom-up approach fitting piecewise affine models and to a recent model-based top-down approach fitting Gaussian models, and is found to be consistently faster while producing more intuitive results in the sense of segmenting time series at peaks and valleys.
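The a posteriori optimal trading step can be sketched as a two-state dynamic program over cash/stock holdings with a linear transaction cost; the recovered switch times are candidate segmentation instants for one channel. This is a minimal sketch of the idea, not the paper's full reversed-series and multichannel consensus machinery.

```python
def optimal_trades(prices, cost):
    # States: 0 = holding cash, 1 = holding one share. Each switch pays
    # a linear transaction cost, which acts as a noise filter.
    n = len(prices)
    cash = [0.0] * n
    stock = [0.0] * n
    prev_c = [0] * n  # 1 if cash[t] came from selling at t
    prev_s = [0] * n  # 1 if stock[t] came from buying at t
    cash[0], stock[0] = 0.0, -prices[0] - cost
    for t in range(1, n):
        sell = stock[t - 1] + prices[t] - cost
        cash[t], prev_c[t] = (sell, 1) if sell > cash[t - 1] else (cash[t - 1], 0)
        buy = cash[t - 1] - prices[t] - cost
        stock[t], prev_s[t] = (buy, 1) if buy > stock[t - 1] else (stock[t - 1], 0)
    # Backtrack from the final cash state to recover the switch times.
    switches, state = [], 0
    for t in range(n - 1, 0, -1):
        if (prev_c[t] if state == 0 else prev_s[t]):
            switches.append(t)
            state = 1 - state
    if state == 1:
        switches.append(0)  # initial buy at t = 0
    return sorted(switches)
```

Buys occur at local valleys and sells at local peaks deeper than the transaction cost, which is why the resulting labels segment the series at its peaks and valleys; raising `cost` filters out smaller oscillations.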

Keywords: time series segmentation, model-free, trading-inspired, multivariate data

Procedia PDF Downloads 136
703 Financial Modeling for Net Present Benefit Analysis of Electric Bus and Diesel Bus and Applications to NYC, LA, and Chicago

Authors: Jollen Dai, Truman You, Xinyun Du, Katrina Liu

Abstract:

Transportation is one of the leading sources of greenhouse gas (GHG) emissions. Thus, to meet the Paris Agreement of 2015, all countries must adopt a different and more sustainable transportation system. From bikes to Maglev, the world is slowly shifting to sustainable transportation, and a useful public transit system requires a sustainable web of buses. As of now, only a handful of cities have adopted a detailed plan to implement a full fleet of e-buses by the 2030s, with Shenzhen in the lead. Every change requires a detailed plan and a focused analysis of its impacts. In this report, both economic and financial implications are taken into consideration to develop a well-rounded 10-year plan for New York City, and the same financial model is applied to two other cities, LA and Chicago. We picked NYC, Chicago, and LA for the comparative NPB analysis because they are all large metropolitan cities with complex transportation systems, and all three have started action plans to achieve a full e-bus fleet in the coming decades. Moreover, their grid carbon footprints and energy prices differ widely, and these are key factors in the benefits of electric buses. Using TCO (Total Cost of Ownership) financial analysis, we developed a model to calculate the NPB (Net Present Benefit) and compare EBSs (electric buses) with DBSs (diesel buses). The model considers all essential aspects: initial investment, including the cost of the bus, charger, and installation; government funds (federal, state, and local); labor cost; energy (electricity or diesel) cost; maintenance cost; insurance cost; health and environmental benefits; and V2G (vehicle-to-grid) benefits. We find about $1,400,000 in benefits over the 12-year lifetime of an EBS compared to a DBS, provided government funds offset 50% of the EBS purchase cost.
With the government subsidy, an EBS begins to generate positive cash flow in the fifth year and pays back its investment in five years. Note that the model counts environmental and health benefits, with $50,000 per bus per year credited as a health benefit. Beyond health benefits, the most significant gains come from energy cost savings and maintenance savings, which are about $600,000 and $200,000, respectively, over the 12-year life cycle. Using linear regression under given budget limitations, we then designed an optimal three-phase process to replace all NYC buses with electric buses in 10 years, i.e., by 2033. The linear regression process minimizes the total cost over the years while yielding the lowest environmental cost. The overall benefit of replacing all DBSs with EBSs for NYC is over $2.1 billion by 2033. For LA and Chicago, the benefits of electrifying the current bus fleets are $1.04 billion and $634 million by 2033, respectively. All NPB analyses and the algorithm for optimizing the electrification phases are implemented in Python code and can be shared.
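The core of such a TCO comparison is a discounted cash-flow calculation; the sketch below (with hypothetical numbers, not the report's calibrated inputs) shows how an NPB and a payback year can be computed from yearly EBS-versus-DBS cost/benefit deltas:

```python
def net_present_benefit(annual_benefits, upfront_extra, discount_rate=0.05):
    # NPB = sum of discounted yearly deltas (energy + maintenance + health
    # + V2G savings of the e-bus over diesel) minus the extra upfront
    # investment (bus, charger, installation, net of subsidies).
    npv = sum(b / (1 + discount_rate) ** (t + 1)
              for t, b in enumerate(annual_benefits))
    return npv - upfront_extra

def payback_year(annual_benefits, upfront_extra, discount_rate=0.05):
    # First year in which cumulative discounted benefits cover the
    # extra upfront cost; None if never within the horizon.
    cum = -upfront_extra
    for t, b in enumerate(annual_benefits, start=1):
        cum += b / (1 + discount_rate) ** t
        if cum >= 0:
            return t
    return None
```

Running this per bus and summing over a phased replacement schedule is, in essence, how a fleet-level figure like the $2.1 billion NYC estimate is assembled.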

Keywords: financial modeling, total cost ownership, net present benefits, electric bus, diesel bus, NYC, LA, Chicago

Procedia PDF Downloads 50
702 Antidiabetic and ADMET Pharmacokinetic Properties of Grewia lasiocarpa E. Mey. ex Harv. Stem Bark Extracts: An in Vitro and in Silico Study

Authors: Akwu N. A., Naidoo Y., Salau V. F., Olofinsan K. A.

Abstract:

Grewia lasiocarpa E. Mey. ex Harv. (Malvaceae) is a Southern African medicinal plant used indigenously, with other plants, for birthing problems. The antidiabetic properties of the hexane, chloroform, and methanol extracts of Grewia lasiocarpa stem bark were assessed using an in vitro α-glucosidase enzyme inhibition assay. The predictive in silico drug-likeness and toxicity properties of the phytocompounds were evaluated using the pkCSM, ADMETlab, and SwissADME computer-aided online tools. The highest α-glucosidase percentage inhibition was observed for the hexane extract (86.76%, IC50 = 0.24 mg/mL), followed by chloroform (63.08%, IC50 = 4.87 mg/mL) and methanol (53.22%, IC50 = 9.41 mg/mL); acarbose, the standard antidiabetic drug, gave 84.54% (IC50 = 1.96 mg/mL). The α-glucosidase assay revealed that the hexane extract exhibited the strongest carbohydrate-enzyme inhibiting capacity and is a better inhibitor than the standard reference drug, acarbose. The computational studies also affirm the results observed in the in vitro α-glucosidase assay. Thus, the extracts of G. lasiocarpa may be considered a potential plant-sourced treatment for type 2 diabetes mellitus. This is the first study of the antidiabetic properties of Grewia lasiocarpa hexane, chloroform, and methanol extracts using in vitro and in silico models.
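The two headline quantities of such an assay reduce to simple formulas; the sketch below (with illustrative absorbance and concentration values, not the study's measurements) computes percentage inhibition and estimates an IC50 by linear interpolation between the bracketing dose points:

```python
def percent_inhibition(abs_control, abs_sample):
    # Inhibition (%) = (A_control - A_sample) / A_control * 100.
    return (abs_control - abs_sample) / abs_control * 100.0

def ic50(concentrations, inhibitions):
    # Linear interpolation between the two dose points bracketing 50 %
    # inhibition; assumes inhibition increases with concentration.
    pairs = list(zip(concentrations, inhibitions))
    for (c0, i0), (c1, i1) in zip(pairs, pairs[1:]):
        if i0 <= 50.0 <= i1:
            return c0 + (50.0 - i0) * (c1 - c0) / (i1 - i0)
    return None  # 50 % not crossed within the tested range
```

A dose-response curve fit (e.g., four-parameter logistic) is the more rigorous route, but interpolation like this is a common quick estimate when only a few concentrations are tested.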

Keywords: grewia lasiocarpa, α-glucosidase inhibition, anti-diabetes, ADMET

Procedia PDF Downloads 104
701 Automatic Tuning for a Systemic Model of Banking Originated Losses (SYMBOL) Tool on Multicore

Authors: Ronal Muresano, Andrea Pagano

Abstract:

Nowadays, mathematical and statistical applications are developed with ever greater complexity and accuracy. This precision and complexity mean that applications need more computational power in order to execute quickly. In this sense, multicore environments play an important role in improving and optimizing the execution time of these applications, as they allow more parallelism within the node. However, taking advantage of this parallelism is not an easy task, because of problems such as core communication, data locality, memory sizes (cache and RAM), synchronization, and data dependencies in the model. These issues become more important when we wish to improve an application's performance and scalability. Hence, this paper describes an optimization method developed for the Systemic Model of Banking Originated Losses (SYMBOL) tool of the European Commission, based on analyzing the application's weaknesses in order to exploit the advantages of the multicore. All these improvements are made in an automatic and transparent manner with the aim of improving the performance metrics of the tool. Finally, experimental evaluations show the effectiveness of the new optimized version, which achieves a considerable improvement in execution time: a reduction of around 96% in the best case tested, between the original serial version and the automatic parallel version.

Keywords: algorithm optimization, bank failures, OpenMP, parallel techniques, statistical tool

Procedia PDF Downloads 369
700 Off-Policy Q-learning Technique for Intrusion Response in Network Security

Authors: Zheni S. Stefanova, Kandethody M. Ramachandran

Abstract:

With our increasing dependency on computer devices, we face the necessity of adequate, efficient, and effective mechanisms for protecting our networks. There are two main problems that Intrusion Detection Systems (IDS) attempt to solve: 1) detecting an attack by analyzing the incoming traffic and inspecting the network (intrusion detection), and 2) producing a prompt response when the attack occurs (intrusion prevention). It is critical to create an intrusion detection model that detects a breach in the system on time, and it is challenging to make it respond automatically, with acceptable delay, at every stage of the monitoring process. We cannot afford to adopt security measures with high computational cost, nor can we accept a mechanism that reacts with a delay. In this paper, we propose an intrusion response mechanism based on artificial intelligence and, more precisely, reinforcement learning techniques (RLT). RLT helps us create a decision agent that controls the process of interacting with the undetermined environment. The goal is to find an optimal policy representing the intrusion response; we therefore solve the reinforcement learning problem using a Q-learning approach. Our agent produces an optimal immediate response while evaluating the network traffic. The Q-learning approach establishes the balance between exploration and exploitation and provides a unique, self-learning, and strategic artificial-intelligence response mechanism for IDS.
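The off-policy Q-learning update described above can be sketched on a deliberately toy environment (two hypothetical traffic states and two responses of our own invention, not the authors' actual state or action space):

```python
import random

# Hypothetical toy rewards: blocking an attack or allowing normal
# traffic is good (+1); blocking normal traffic or allowing an attack
# is bad (-1).
REWARD = {("normal", "allow"): 1, ("normal", "block"): -1,
          ("attack", "allow"): -1, ("attack", "block"): 1}

def train(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    Q = {s: {a: 0.0 for a in ("allow", "block")} for s in ("normal", "attack")}
    state = "normal"
    for _ in range(episodes):
        # epsilon-greedy action choice balances exploration/exploitation
        if rng.random() < epsilon:
            action = rng.choice(("allow", "block"))
        else:
            action = max(Q[state], key=Q[state].get)
        reward = REWARD[(state, action)]
        next_state = rng.choice(("normal", "attack"))  # undetermined environment
        # off-policy Q-learning update: bootstrap from the best next action
        Q[state][action] += alpha * (
            reward + gamma * max(Q[next_state].values()) - Q[state][action])
        state = next_state
    return Q

Q = train()
policy = {s: max(Q[s], key=Q[s].get) for s in Q}  # learned response per state
```

After training, the greedy policy blocks during attacks and allows normal traffic, which is the intended "optimal immediate response" behaviour in this toy setting.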

Keywords: cyber security, intrusion prevention, optimal policy, Q-learning

Procedia PDF Downloads 236
699 Power Allocation Algorithm for Orthogonal Frequency Division Multiplexing Based Cognitive Radio Networks

Authors: Bircan Demiral

Abstract:

Cognitive radio (CR) is a promising technology that addresses the spectrum scarcity problem for future wireless communications. Orthogonal Frequency Division Multiplexing (OFDM) provides a flexible multicarrier framework for allocating power across subbands in cognitive radio networks (CRNs). While CR is a solution to spectrum scarcity, it also brings up a capacity problem. In this paper, a novel power allocation algorithm that aims at maximizing the sum capacity in OFDM-based cognitive radio networks is proposed. The proposed allocation algorithm is based on the previously developed water-filling algorithm. To reduce the computational complexity of the water-filling calculation, the proposed algorithm allocates the total power per subcarrier. The power allocated to the subcarriers increases the sum capacity. To quantify this increase, MATLAB was used, and the proposed power allocation was compared with the average power allocation, water-filling, and general power allocation algorithms. The water-filling algorithm performed worse than the proposed algorithm but better than the other two. The proposed algorithm is therefore the best of the compared algorithms in terms of capacity increase. In addition, the effect of a change in the number of subcarriers on capacity was discussed. Simulation results show that increasing the number of subcarriers increases the capacity.
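The classical water-filling baseline referred to above can be sketched as follows (a generic textbook form under the usual log-capacity model, not the authors' reduced-complexity variant):

```python
def water_filling(gains, total_power):
    """Allocate total_power across subcarriers with channel gains
    (SNR per unit power) so that p_i = max(0, level - 1/g_i) and
    the powers sum to total_power (maximizes sum log2(1 + g_i * p_i))."""
    # Consider subcarriers from best to worst (smallest 1/g first);
    # drop the worst ones until the water level covers all active channels.
    active = sorted(range(len(gains)), key=lambda i: 1.0 / gains[i])
    while active:
        level = (total_power + sum(1.0 / gains[i] for i in active)) / len(active)
        if level >= 1.0 / gains[active[-1]]:
            break                 # every active channel gets nonnegative power
        active.pop()              # worst remaining channel gets no power
    power = [0.0] * len(gains)
    for i in active:
        power[i] = level - 1.0 / gains[i]
    return power
```

Stronger subcarriers receive more power, and channels too weak to reach the water level are switched off entirely; the per-subcarrier scheme in the abstract trades some of this optimality for lower computational complexity.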

Keywords: cognitive radio network, OFDM, power allocation, water filling

Procedia PDF Downloads 137
698 A Comparative Study of Motion Events Encoding in English and Italian

Authors: Alfonsina Buoniconto

Abstract:

The aim of this study is to investigate the degree of cross-linguistic and intra-linguistic variation in the encoding of motion events (MEs) in English and Italian, these being typologically different languages that both show signs of disobedience to their respective types. As a matter of fact, the traditional typological classification of ME encoding distributes languages into two macro-types, based on the preferred locus for the expression of Path, the main ME component (other components being Figure, Ground and Manner) characterized by conceptual and structural prominence. According to this model, Satellite-framed (SF) languages typically express Path information in verb-dependent items called satellites (e.g. preverbs and verb particles), with main verbs encoding Manner of motion; whereas Verb-framed (VF) languages tend to include Path information within the verbal locus, leaving Manner to adjuncts. Although this dichotomy is broadly valid, languages do not always behave according to their typical classification patterns. English, for example, is usually ascribed to the SF type due to its rich inventory of postverbal particles and phrasal verbs used to express spatial relations (i.e. the cat climbed down the tree); nevertheless, it is not uncommon to find constructions such as the fog descended slowly, which is typical of the VF type. Conversely, Italian is usually described as being VF (cf. Paolo uscì di corsa ‘Paolo went out running’), yet SF constructions like corse via in lacrime ‘She ran away in tears’ are also frequent. This paper will try to demonstrate that such typological overlap is due to the fact that the semantic units making up MEs are distributed within several loci of the sentence (not only verbs and satellites), thus determining a number of different constructions stemming from convergent factors.
Indeed, the linguistic expression of motion events depends not only on the typological nature of languages in the traditional sense, but also on a series of morphological, lexical, and syntactic resources, as well as on inferential, discursive, usage-related, and cultural factors that make semantic information more or less accessible, frequent, and easy to process. Hence, rather than describe English and Italian in dichotomic terms, this study focuses on the investigation of cross-linguistic and intra-linguistic variation in the use of all the strategies made available by each linguistic system to express motion. Evidence for these assumptions is provided by parallel corpora analysis. The sample texts are taken from two contemporary Italian novels and their respective English translations. The 400 motion occurrences selected (200 in English and 200 in Italian) were scanned according to the MODEG (Motion Decoding Grid) methodology, which grants data comparability through the indexation and retrieval of combined morphosyntactic and semantic information at different levels of detail.

Keywords: construction typology, motion event encoding, parallel corpora, satellite-framed vs. verb-framed type

Procedia PDF Downloads 260
697 Object-Based Flow Physics for Aerodynamic Modelling in Real-Time Environments

Authors: William J. Crowther, Conor Marsh

Abstract:

Object-based flow simulation allows fast computation of arbitrarily complex aerodynamic models made up of simple objects with limited flow interactions. The proposed approach is universally applicable to objects made from arbitrarily scaled ellipsoid primitives at arbitrary aerodynamic attitude and angular rate. The component-based aerodynamic modelling approach increases efficiency by allowing selective inclusion of different physics models at run-time and allows extensibility through the development of new models. Insight into the numerical stability of the model under first-order fixed-time-step integration schemes is provided by a stability analysis of the drag component. The compute cost of model components and functions is evaluated and compared against numerical benchmarks. Static model outputs are verified against theoretical expectations, and dynamic behaviour is verified using falling-plate data from the literature. The model is applied to a range of case studies to demonstrate its extensibility, ease of use, and low computational cost. Dynamically complex multi-body systems can be implemented in a transparent and efficient manner, and we successfully demonstrate large scenes with hundreds of objects interacting with diverse flow fields.
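The kind of fixed-time-step stability question raised above can be illustrated with a minimal sketch (a generic example, not the authors' model): forward-Euler integration of linear drag dv/dt = -k*v amplifies the velocity by (1 - k*dt) each step, so it is stable only while |1 - k*dt| < 1, i.e. dt < 2/k.

```python
def euler_drag_velocity(v0, k, dt, steps):
    """Forward-Euler integration of linear drag dv/dt = -k*v.
    Each step multiplies v by (1 - k*dt), so the scheme is
    stable (decaying) only for dt < 2/k."""
    v = v0
    for _ in range(steps):
        v += dt * (-k * v)
    return v

# With k = 1: dt = 0.1 decays toward zero, dt = 2.5 oscillates and blows up.
stable = euler_drag_velocity(1.0, 1.0, 0.1, 100)
unstable = euler_drag_velocity(1.0, 1.0, 2.5, 10)
```

In a real-time environment the frame time fixes dt, so stability instead constrains the admissible drag coefficient per object, which is presumably why the drag component is the natural place to start such an analysis.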

Keywords: aerodynamics, real-time simulation, low-order model, flight dynamics

Procedia PDF Downloads 102
696 Experimental and Computational Analysis of Glass Fiber Reinforced Plastic Beams with Piezoelectric Fibers

Authors: Selin Kunc, Srinivas Koushik Gundimeda, John A. Gallagher, Roselita Fragoudakis

Abstract:

This study investigates the behavior of Glass Fiber Reinforced Plastic (GFRP) laminated beams additionally reinforced with piezoelectric fibers. The electromechanical behavior of piezoelectric materials coupled with high-strength/low-weight GFRP laminated beams can have significant applications in a wide range of industries. Energy scavenging through mechanical vibrations is the focus of this study, and possible applications can be seen in the automotive industry. The study examines the behavior of such composite laminates using Classical Lamination Theory (CLT) under three-point bending conditions. Fiber orientation is optimized for the stiffness and deflection that yield maximum energy output. Finite element models built in ABAQUS/CAE are verified through experimental testing. The stacking sequences examined are [0°]s, [0°/45°]s, and [45°/-45°]s. Results show the superiority of the stacking sequence [0°/45°]s, which provides higher strength at a lower weight and maximum energy output. Furthermore, laminated GFRP beams additionally reinforced with piezoelectric fibers can be used under bending not only to replace metallic components, providing similar strength at a lower weight, but also to provide an energy output.

Keywords: classical lamination theory (CLT), energy scavenging, glass fiber reinforced plastics (GFRP), piezoelectric fibers

Procedia PDF Downloads 306
695 Modified Model for UV-Laser Corneal Ablation

Authors: Salah Hassab Elnaby, Omnia Hamdy, Aziza Ahmed Hassan, Salwa Abdelkawi, Ibrahim Abdelhalim

Abstract:

Laser corneal reshaping has been proposed as a successful treatment for many refraction disorders. However, some physical and chemical aspects of the laser's interaction with corneal tissue are still not fully explained. Therefore, different computational and mathematical models have been implemented to predict the depth of the ablated channel and to calculate the ablation threshold and the local temperature rise. In the current paper, we present a modified model that aims to answer some of the open questions about the ablation threshold, the ablation rate, and the physical and chemical mechanisms of the action. The proposed model consists of three parts. The first part deals with possible photochemical reactions between the incident photons and various components of the cornea (collagen, water, etc.). Such photochemical reactions may end in photo-ablation or merely in the electronic excitation of molecules. A chemical reaction is then responsible for the ablation threshold. Finally, another chemical reaction produces fragments that can be cleared away. The model takes all processes into account simultaneously, with different probabilities. Moreover, the effect of applying different laser wavelengths studied before, namely the common excimer laser (193 nm) and solid-state lasers (213 nm and 266 nm), has been investigated. Despite the success and ubiquity of the ArF laser, the presented results reveal that a carefully designed 213-nm laser gives the same results with fewer operational drawbacks. Moreover, the use of a mode-locked laser could also decrease the risk of heat generation and diffusion.

Keywords: UV lasers, mathematical model, corneal ablation, photochemical ablation

Procedia PDF Downloads 89
694 Intelligent Agent-Based Model for the 5G mmWave O2I Technology Adoption

Authors: Robert Joseph M. Licup

Abstract:

The deployment of the fifth-generation (5G) mobile system through mmWave frequencies is the new solution to the requirement of providing higher bandwidth readily available for all users. Usage patterns of mobile users have moved towards work-from-home and online-class set-ups because of the pandemic, and previous mobile technologies can no longer meet the high speed and bandwidth requirements, given the drastic shift of transactions to the home. The underutilized millimeter-wave (mmWave) frequencies are used by 5G cellular networks to support multi-gigabit-per-second (Gbps) transmission. However, owing to the short wavelengths, technical challenges such as high path loss, directivity, blockage sensitivity, and narrow beamwidth need to be addressed. Different tools, technologies, and scenarios are explored to effectively support network design, accurate channel modeling, implementation, and deployment. A big challenge remains, however, in how consumers will adopt this solution and maximize the benefits offered by 5G technology. This research proposes to study the intricacies of technology diffusion, individual attitudes and behaviors, and how technology adoption can be attained. An agent-based simulation model, shaped by actual applications, the technology solution, and the related literature, was used to arrive at a computational model. The research examines the different attributes, factors, and intricacies that can affect each identified agent's technology adoption.

Keywords: agent-based model, AnyLogic, 5G O2I, 5G mmWave solutions, technology adoption

Procedia PDF Downloads 108
693 Algorithm for Quantification of Pulmonary Fibrosis in Chest X-Ray Exams

Authors: Marcela de Oliveira, Guilherme Giacomini, Allan Felipe Fattori Alves, Ana Luiza Menegatti Pavan, Maria Eugenia Dela Rosa, Fernando Antonio Bacchim Neto, Diana Rodrigues de Pina

Abstract:

It is estimated that each year one death every 10 seconds (about 2 million deaths) in the world is attributed to tuberculosis (TB). Even after effective treatment, TB leaves sequelae such as pulmonary fibrosis, compromising patients' quality of life. Evaluations of these sequelae are usually performed subjectively by radiology specialists, and subjective evaluation may show inter- and intra-observer variation. The x-ray examination is the diagnostic imaging method most commonly used in the monitoring of patients diagnosed with TB and the least costly to the institution. The application of computational algorithms is of utmost importance for a more objective quantification of pulmonary impairment in individuals with tuberculosis. The purpose of this research is to use computer algorithms to quantify pulmonary impairment pre- and post-treatment in patients with pulmonary TB. The x-ray images of 10 patients with a TB diagnosis confirmed by sputum smear examination were studied. Initially, segmentation of the total lung area was performed (posteroanterior and lateral views), then targeted to the region compromised by the pulmonary sequelae. Through morphological operators and the application of a noise-reduction tool, it was possible to determine the compromised lung volume. The largest difference found pre- and post-treatment was 85.85% and the smallest was 54.08%.

Keywords: algorithm, radiology, tuberculosis, x-rays exam

Procedia PDF Downloads 419
692 Magnetron Sputtered Thin-Film Catalysts with Low Noble Metal Content for Proton Exchange Membrane Water Electrolysis

Authors: Peter Kus, Anna Ostroverkh, Yurii Yakovlev, Yevheniia Lobko, Roman Fiala, Ivan Khalakhan, Vladimir Matolin

Abstract:

Hydrogen economy is a concept of a low-emission society which harvests most of its energy from renewable sources (e.g., wind and solar) and, in case of overproduction, electrochemically turns the excess into hydrogen, which serves as an energy carrier. Proton exchange membrane water electrolyzers (PEMWE) are the backbone of this concept. By fast-response electricity-to-hydrogen conversion, PEMWEs will not only stabilize the electrical grid but also provide high-purity hydrogen for a variety of fuel-cell-powered devices, ranging from consumer electronics to vehicles. Wider commercialization of PEMWE technology is, however, hindered by the high prices of the noble metals necessary for catalyzing the redox reactions within the cell: platinum for the hydrogen evolution reaction (HER) on the cathode, and iridium for the oxygen evolution reaction (OER) on the anode. A possible way to lower the loading of Pt and Ir is to use conductive high-surface-area nanostructures as catalyst supports in conjunction with thin-film catalyst deposition. The presented study discusses an unconventional technique of membrane electrode assembly (MEA) preparation. Noble metal catalysts (Pt and Ir) were magnetron sputtered in very low loadings onto the surface of porous sublayers (located on the gas diffusion layer or directly on the membrane), forming a localized three-phase boundary. An ultrasonically sprayed, corrosion-resistant TiC-based sublayer was used as the support material on the anode, whereas magnetron-sputtered, nanostructured, etched nitrogenated carbon (CNx) served the same role on the cathode. Using this configuration, we were able to significantly decrease the amount of noble metals (to thicknesses of just tens of nanometers) while keeping the performance comparable to that of average state-of-the-art catalysts.
Complex characterization of the prepared supported catalysts includes in-cell performance and durability tests and electrochemical impedance spectroscopy (EIS), as well as scanning electron microscopy (SEM) imaging and X-ray photoelectron spectroscopy (XPS) analysis. Our research proves that magnetron sputtering is a suitable method for thin-film deposition of electrocatalysts. The tested set-up of thin-film supported anode and cathode catalysts, with a combined loading of just 120 µg cm⁻², yields remarkable values of specific current. The described approach of thin-film, low-loading catalyst deposition may be relevant when noble metal reduction is the topmost priority.

Keywords: hydrogen economy, low-loading catalyst, magnetron sputtering, proton exchange membrane water electrolyzer

Procedia PDF Downloads 163
691 Computer Simulation to Investigate Magnetic and Wave-Absorbing Properties of Iron Nanoparticles

Authors: Chuan-Wen Liu, Min-Hsien Liu, Chung-Chieh Tai, Bing-Cheng Kuo, Cheng-Lung Chen, Huazhen Shen

Abstract:

A recent surge in research on magnetic radar absorbing materials (RAMs) has presented researchers with new opportunities and challenges. This study was performed to gain a better understanding of the wave-absorbing phenomenon of magnetic RAMs. First, we hypothesized that the absorbing phenomenon is dependent on the particle shape. Using the Materials Studio program and the micro-dot magnetic dipole (MDMD) method, we obtained results from magnetic RAMs that support this hypothesis. The total MDMD energy of disk-like iron particles was greater than that of spherical iron particles. In addition, the particulate aggregation phenomenon decreases the wave absorbance, according to both experimental and computational data. In conclusion, this study may be of importance in explaining the wave-absorbing characteristics of magnetic RAMs. Combining molecular dynamics simulation results and the theory of magnetization of magnetic dots, we investigated the magnetic properties of iron materials with different particle shapes and degrees of aggregation under external magnetic fields. The MDMD energies of the materials under magnetic fields of various strengths were simulated. Our results suggest that disk-like iron particles have a better magnetization than spherical iron particles. This result can be correlated with the magnetic wave-absorbing property of iron material.

Keywords: wave-absorbing property, magnetic material, micro-dot magnetic dipole, particulate aggregation

Procedia PDF Downloads 490
690 Accelerated Molecular Simulation: A Convolution Approach

Authors: Jannes Quer, Amir Niknejad, Marcus Weber

Abstract:

Computational drug design is often based on molecular dynamics simulations of molecular systems. Molecular dynamics can be used to simulate, e.g., the binding and unbinding event of a small drug-like molecule with regard to the active site of an enzyme or a receptor. However, the time scale of the overall binding event is many orders of magnitude longer than the time scale of the simulation. Thus, there is a need to speed up molecular simulations. In order to do so, the molecular dynamics trajectories have to be "steered" out of the local minimizers of the potential energy surface of the molecular system, the so-called metastabilities. Increasing the kinetic energy (temperature) is one possibility for accelerating simulated processes. However, with temperature the entropy of the molecular system increases too, and this kind of "steering" is not directed enough to drive the molecule out of the minimum toward the saddle point. In this article, we give a new mathematical idea of how a potential energy surface can be changed in such a way that entropy is kept under control while the trajectories are still steered out of the metastabilities. In order to compute the unsteered transition behaviour from a steered simulation, we propose to use extrapolation methods. In the end, we show mathematically that our method accelerates the simulations along the direction in which the curvature of the potential energy surface changes the most, i.e., from local minimizers towards saddle points.

Keywords: extrapolation, Eyring-Kramers, metastability, multilevel sampling

Procedia PDF Downloads 328
689 A Unified Model for Longshore Sediment Transport Rate Estimation

Authors: Aleksandra Dudkowska, Gabriela Gic-Grusza

Abstract:

Wind-wave-induced sediment transport is an important multidimensional and multiscale dynamic process affecting coastal seabed changes and coastline evolution. Knowledge of the sediment transport rate is important for solving many environmental and geotechnical issues. There are many types of sediment transport models, but none of them is widely accepted, because the process is not fully defined. Another problem is the lack of sufficient measurement data to verify the proposed hypotheses. There are different types of models for longshore sediment transport (LST, which is discussed in this work) and cross-shore transport, which relate to different time and space scales of the processes. There are models describing bed-load transport (discussed in this work), suspended transport, and total sediment transport. LST models use, among other things, information about (i) the flow velocity near the bottom, which in the case of wave-current interaction in the coastal zone is a separate problem, and (ii) the critical bed shear stress, which strongly depends on the type of sediment and is complicated in the case of heterogeneous sediment. Moreover, the LST rate depends strongly on the local environmental conditions. To organize existing knowledge, a series of sediment transport model intercomparisons was carried out as part of the project “Development of a predictive model of morphodynamic changes in the coastal zone”. Four classical one-grid-point models were studied and intercompared over a wide range of bottom shear stress conditions, corresponding to wind-wave conditions appropriate for the coastal zone in Polish marine areas. The set of models comprises classical theories that assume a simplified influence of turbulence on sediment transport (Du Boys, Meyer-Peter & Müller, Ribberink, Engelund & Hansen). It turned out that the estimated instantaneous longshore mass sediment transport values are in general agreement with earlier studies and measurements conducted in the area of interest.
However, none of the formulas really stands out as particularly suitable for the test location over the whole analyzed flow velocity range. Therefore, based on the models discussed, a new unified formula for longshore sediment transport rate estimation is introduced, which constitutes the main original result of this study. The sediment transport rate is calculated from the bed shear stress and the critical bed shear stress. The dependence on environmental conditions is expressed by one coefficient (in the form of a constant or a function); thus, the model presented can be quite easily adjusted to local conditions. The importance of each model parameter for specific velocity ranges is discussed. Moreover, it is shown that the near-bottom flow velocity is the main determinant of longshore bed load in storm conditions. Thus, the accuracy of the results depends less on the sediment transport model itself and more on appropriate modeling of the near-bottom velocities.
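One of the classical formulas intercompared above, Meyer-Peter & Müller, can be sketched as follows (a textbook form with illustrative default parameter values for quartz sand in water, not the authors' unified formula):

```python
import math

def mpm_bedload(tau, d50, rho_s=2650.0, rho=1000.0, g=9.81, theta_c=0.047):
    """Meyer-Peter & Mueller bed-load transport rate, textbook form.
    tau: bed shear stress [Pa]; d50: median grain diameter [m].
    Returns volumetric transport per unit width [m^2/s]."""
    # Shields parameter: dimensionless bed shear stress
    theta = tau / ((rho_s - rho) * g * d50)
    if theta <= theta_c:
        return 0.0                            # below critical shear stress
    phi = 8.0 * (theta - theta_c) ** 1.5      # dimensionless transport rate
    return phi * math.sqrt((rho_s / rho - 1.0) * g * d50 ** 3)
```

The structure mirrors the unified formula described in the abstract: transport vanishes below the critical bed shear stress and grows nonlinearly with the excess shear, with sediment-dependent coefficients absorbing the local conditions.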

Keywords: bedload transport, longshore sediment transport, sediment transport models, coastal zone

Procedia PDF Downloads 387
688 Using Manipulating Urban Layouts to Enhance Ventilation and Thermal Comfort in Street Canyons

Authors: Su Ying-Ming

Abstract:

The high density of high-rise buildings in urban areas gradually intensifies the Urban Heat Island effect. This study focuses on the relationship between urban layout and ventilation comfort in street canyons. It takes the Songjiang Nanjing Rd. area of Taipei, Taiwan as an example and evaluates the wind environment comfort index by field measurement and Computational Fluid Dynamics (CFD) in order to improve both the quality and quantity of the environment. Different factors, including street block size, building width, street width ratio, and wind direction, were used to discuss the ventilation potential. The environmental wind field was measured with environmental testing equipment (Testo 480). Block sizes, building widths, street width ratios, and wind directions were evaluated under the condition of constant floor area, with CFD simulation used to adjust the research methods for optimizing the regional wind environment. The results showed that building width influences the efficiency of outdoor ventilation and that ventilation efficiency improves with larger street widths. The study found that block width, the H/D value, and the PR value are closely related. Furthermore, a significant relationship was shown between alterations of street block geometry and outdoor comfort.

Keywords: urban ventilation path, ventilation efficiency indices, CFD, building layout

Procedia PDF Downloads 385
687 Bayesian Borrowing Methods for Count Data: Analysis of Incontinence Episodes in Patients with Overactive Bladder

Authors: Akalu Banbeta, Emmanuel Lesaffre, Reynaldo Martina, Joost Van Rosmalen

Abstract:

Including data from previous studies (historical data) in the analysis of a current study may reduce the sample size requirement and/or increase the power of the analysis. The most common example is incorporating historical control data in the analysis of a current clinical trial. However, this only applies when the historical control data are similar enough to the current control data. Recently, several Bayesian approaches for incorporating historical data have been proposed, such as the meta-analytic-predictive (MAP) prior and the modified power prior (MPP), both for a single control arm as well as for multiple historical control arms. Here, we examine the performance of the MAP and MPP approaches for the analysis of (over-dispersed) count data. To this end, we propose a computational method for the MPP approach for the Poisson and negative binomial models. We conducted an extensive simulation study to assess the performance of the Bayesian approaches. Additionally, we illustrate our approaches on an overactive bladder data set. For similar data across the control arms, the MPP approach outperformed the MAP approach with respect to statistical power. When the means across the control arms are different, the MPP yielded a slightly inflated type I error (TIE) rate, whereas the MAP did not. In contrast, when the dispersion parameters are different, the MAP gave an inflated TIE rate, whereas the MPP did not. We conclude that the MPP approach is more promising than the MAP approach for incorporating historical count data.
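For the conjugate Poisson case, the power-prior idea of down-weighting the historical likelihood can be sketched in closed form (an illustrative simplification with a fixed power delta rather than the modified power prior's random delta, and a hypothetical vague Gamma(0.001, 0.001) baseline prior):

```python
def poisson_power_prior_posterior(y_current, y_hist, delta, a0=0.001, b0=0.001):
    """Gamma posterior for a Poisson rate when the historical likelihood
    is raised to a fixed power delta in [0, 1] (delta=0 ignores the
    historical arm; delta=1 pools it fully with the current arm)."""
    a = a0 + delta * sum(y_hist) + sum(y_current)   # shape: weighted event counts
    b = b0 + delta * len(y_hist) + len(y_current)   # rate: weighted sample sizes
    return a, b                                     # posterior mean is a / b

def posterior_mean(y_current, y_hist, delta):
    a, b = poisson_power_prior_posterior(y_current, y_hist, delta)
    return a / b
```

As delta moves from 0 to 1 the posterior mean shifts monotonically from the current-data estimate toward the pooled estimate, which is the borrowing behaviour whose frequentist properties (power, type I error) the study evaluates.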

Keywords: count data, meta-analytic prior, negative binomial, Poisson

Procedia PDF Downloads 117