Search results for: quantile function model
19645 Metabolic Predictive Model for PMV Control Based on Deep Learning
Authors: Eunji Choi, Borang Park, Youngjae Choi, Jinwoo Moon
Abstract:
In this study, a predictive model for estimating the metabolic rate (MET) of the human body was developed for optimal control of the indoor thermal environment. Images of human bodies engaged in indoor activities and the corresponding body-joint coordinate values were collected as the data sets used in the predictive model. A deep learning algorithm was used for the initial model, and its numbers of hidden layers and hidden neurons were optimized. Lastly, the prediction performance was analyzed after the model was trained on the collected data. In conclusion, the feasibility of MET prediction was confirmed, and future work was proposed: collecting more varied data and further developing the predictive model.
Keywords: deep learning, indoor quality, metabolism, predictive model
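As an illustration of the architecture search described above, a minimal sketch could look like the following; the joint-coordinate feature dimensions and the search grid are illustrative assumptions, not values from the study.

```python
# Hypothetical sketch: grid search over hidden-layer counts/widths for a
# MET regressor trained on body-joint coordinates (assumed 18 joints x 2D).
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor

X = np.random.rand(500, 36)   # placeholder joint-coordinate features
y = np.random.rand(500)       # placeholder MET labels

search = GridSearchCV(
    MLPRegressor(max_iter=2000, random_state=0),
    param_grid={"hidden_layer_sizes": [(64,), (64, 64), (128, 64, 32)]},
    cv=5, scoring="neg_mean_squared_error",
)
search.fit(X, y)
print(search.best_params_, -search.best_score_)
```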
Procedia PDF Downloads 257
19644 A Distribution Free Test for Censored Matched Pairs
Authors: Ayman Baklizi
Abstract:
This paper discusses the problem of testing hypotheses about the lifetime distributions of a matched pair based on censored data. A distribution-free test based on a runs statistic is proposed. Its null distribution and power function are found in a simple, convenient form. Some properties of the test statistic and its power function are studied.
Keywords: censored data, distribution free, matched pair, runs statistics
Procedia PDF Downloads 287
19643 Geometrical Analysis of an Atheroma Plaque in Left Anterior Descending Coronary Artery
Authors: Sohrab Jafarpour, Hamed Farokhi, Mohammad Rahmati, Alireza Gholipour
Abstract:
In the current study, a nonlinear fluid-structure interaction (FSI) biomechanical model of atherosclerosis in the left anterior descending (LAD) coronary artery is developed to perform a detailed sensitivity analysis of the geometrical features of an atheroma plaque. In the development of the numerical model, first, a 3D geometry of the diseased artery is developed based on patient-specific dimensions obtained from experimental studies. The geometry includes four influential geometric characteristics: stenosis ratio, plaque shoulder-length, fibrous cap thickness, and eccentricity intensity. Then, a suitable strain energy density function (SEDF) is proposed based on a detailed material stability analysis to accurately model the hyperelasticity of the arterial walls. The time-varying inlet velocity and outlet pressure profiles are adopted from experimental measurements to incorporate the pulsatile nature of the blood flow. In addition, a computationally efficient type of structural boundary condition is imposed on the arterial walls. Finally, a non-Newtonian viscosity model is implemented to model the shear-thinning behaviour of the blood flow. According to the results, the structural responses in terms of the maximum principal stress (MPS) are affected more than the fluid responses in terms of wall shear stress (WSS) as the geometrical characteristics vary. The extent of these changes is critical in the vulnerability assessment of an atheroma plaque.
Keywords: atherosclerosis, fluid-structure interaction modeling, material stability analysis, nonlinear biomechanics
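The abstract does not name its shear-thinning law; one common choice in blood-flow FSI studies is the Carreau model, sketched below with literature-typical blood parameters (the values are standard rheology assumptions, not this study's).

```python
# Hypothetical Carreau model for shear-thinning blood viscosity.
# Parameter values are common literature choices (Cho & Kensey), assumed here.
import numpy as np

ETA_0, ETA_INF = 0.056, 0.00345  # zero/infinite-shear viscosities [Pa*s]
LAMBDA, N = 3.313, 0.3568        # relaxation time [s], power-law index [-]

def carreau_viscosity(shear_rate):
    """Apparent viscosity [Pa*s] at a given shear rate [1/s]."""
    return ETA_INF + (ETA_0 - ETA_INF) * (1.0 + (LAMBDA * shear_rate) ** 2) ** ((N - 1) / 2)

print(carreau_viscosity(np.array([1.0, 100.0, 1000.0])))
```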
Procedia PDF Downloads 88
19642 Model Updating-Based Approach for Damage Prognosis in Frames via Modal Residual Force
Authors: Gholamreza Ghodrati Amiri, Mojtaba Jafarian Abyaneh, Ali Zare Hosseinzadeh
Abstract:
This paper presents an effective model updating strategy for damage localization and quantification in frames by formulating the damage detection problem as an optimization problem. A generalized version of the Modal Residual Force (MRF) is employed to construct a new damage-sensitive cost function. Then, the Grey Wolf Optimization (GWO) algorithm is utilized to solve the resulting inverse problem, and the global extrema are reported as the damage detection results. The applicability of the presented method is investigated by studying different damage patterns on the IASC-ASCE benchmark problem, as well as a planar shear frame structure. The obtained results emphasize the good performance of the method not only in noise-free cases but also when the input data are contaminated with different levels of noise.
Keywords: frame, grey wolf optimization algorithm, modal residual force, structural damage detection
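A minimal sketch of the Grey Wolf Optimization loop used to minimize a damage-sensitive cost function follows; the quadratic test function merely stands in for the MRF-based cost, and all sizes are illustrative.

```python
# Minimal Grey Wolf Optimizer sketch for minimizing a damage-sensitive cost
# function; the quadratic test function stands in for the MRF-based cost.
import numpy as np

def gwo(cost, dim, n_wolves=20, iters=200, lb=-1.0, ub=1.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        order = np.argsort([cost(x) for x in X])
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2.0 * (1.0 - t / iters)           # linearly decreasing coefficient
        for i in range(n_wolves):
            Xnew = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                Xnew += leader - A * np.abs(C * leader - X[i])
            X[i] = np.clip(Xnew / 3.0, lb, ub)
    return min(X, key=cost)

# Example: recover a "damage vector" of zeros (undamaged state).
best = gwo(lambda x: float(np.sum(x ** 2)), dim=5)
print(best)
```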
Procedia PDF Downloads 389
19641 A Damage-Plasticity Concrete Model for Damage Modeling of Reinforced Concrete Structures
Authors: Thanh N. Do
Abstract:
This paper addresses the modeling of two critical behaviors of concrete material in reinforced concrete components: (1) the increase in strength and ductility due to confining stresses from surrounding transverse steel reinforcement, and (2) the progressive deterioration in strength and stiffness due to high strain and/or cyclic loading. To improve the state of the art, the author presents a new 3D constitutive model of concrete material based on plasticity and continuum damage mechanics theory to simulate both the confinement effect and the strength deterioration in reinforced concrete components. The model defines a yield function of the stress invariants and a compressive damage threshold based on the level of confining stresses to automatically capture the increase in strength and ductility when subjected to high compressive stresses. The model introduces two damage variables to describe the strength and stiffness deterioration under tensile and compressive stress states. The damage formulation characterizes well the degrading behavior of concrete material, including the nonsymmetric strength softening in tension and compression, as well as the progressive strength and stiffness degradation under primary and follower load cycles. The proposed damage model is implemented in a general-purpose finite element analysis program, allowing an extensive set of numerical simulations to assess its ability to capture the confinement effect and the degradation of the load-carrying capacity and stiffness of structural elements. It is validated against a collection of experimental data on the hysteretic behavior of reinforced concrete columns and shear walls under different load histories. These correlation studies demonstrate the ability of the model to describe vastly different hysteretic behaviors with a relatively consistent set of parameters. The model shows excellent consistency in response determination with very good accuracy. Its numerical robustness and computational efficiency are also very good and will be further assessed with large-scale simulations of structural systems.
Keywords: concrete, damage-plasticity, shear wall, confinement
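As a hedged illustration of the two-variable damage formulation described above (notation assumed, not taken from the paper), the nominal stress can be written by degrading the tensile and compressive parts of the effective stress separately:

```latex
% Illustrative two-variable damage split (assumed notation):
% \bar{\sigma}^{+} / \bar{\sigma}^{-} are the tensile/compressive parts of
% the effective (undamaged) stress, d_t and d_c the damage variables.
\sigma = (1 - d_t)\,\bar{\sigma}^{+} + (1 - d_c)\,\bar{\sigma}^{-},
\qquad 0 \le d_t,\, d_c \le 1
```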
Procedia PDF Downloads 169
19640 Modeling of UAV Longitudinal Dynamics through System Identification Technique
Authors: Asadullah I. Qazi, Mansoor Ahsan, Zahir Ashraf, Uzair Ahmad
Abstract:
System identification of an Unmanned Aerial Vehicle (UAV), to acquire its mathematical model, is a significant step in the process of aircraft flight automation. A reliable mathematical model is an established requirement for autopilot design, flight simulator development, aircraft performance appraisal, analysis of aircraft modifications, preflight testing of prototype aircraft, and investigation of fatigue life and stress distribution. This research is aimed at system identification of a fixed-wing UAV by means of specifically designed flight experiments. The purpose-designed flight maneuvers were performed on the UAV, and the aircraft states were recorded during these flights. The acquired data were preprocessed for noise filtering and bias removal, followed by parameter estimation of the longitudinal dynamics transfer functions using the MATLAB system identification toolbox. Black-box transfer function models, in response to elevator and throttle inputs, were estimated using the least-squares error technique. The identification results show a high confidence level and goodness of fit between the estimated model and the actual aircraft response.
Keywords: fixed wing UAV, system identification, black box modeling, longitudinal dynamics, least square error
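A minimal sketch of the black-box least-squares step, fitting a discrete ARX transfer-function model to recorded input-output data, is shown below; the model orders and the simulated "true" dynamics are assumptions (the study itself used the MATLAB system identification toolbox).

```python
# Sketch of black-box least-squares identification: fit a discrete ARX model
# y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2] from flight records.
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    n = max(na, nb)
    rows = [np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]])
            for k in range(n, len(y))]
    theta, *_ = np.linalg.lstsq(np.asarray(rows), y[n:], rcond=None)
    return theta  # [a1..a_na, b1..b_nb]

u = np.random.randn(1000)                      # e.g. elevator input
y = np.zeros(1000)
for k in range(2, 1000):                       # simulated "true" pitch response
    y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + 0.5 * u[k-1]
print(fit_arx(u, y))                           # approx. [1.5, -0.7, 0.5, 0.0]
```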
Procedia PDF Downloads 325
19639 Establishing a Computational Screening Framework to Identify Environmental Exposures Using Untargeted Gas-Chromatography High-Resolution Mass Spectrometry
Authors: Juni C. Kim, Anna R. Robuck, Douglas I. Walker
Abstract:
The human exposome, which includes chemical exposures over the lifetime and their effects, is now recognized as an important measure for understanding human health; however, the complexity of the data makes the identification of environmental chemicals challenging. The goal of our project was to establish a computational workflow for the improved identification of environmental pollutants containing chlorine or bromine. Using the “pattern.search” function available in the R package NonTarget, we wrote a multifunctional script that searches mass spectral clusters from untargeted gas-chromatography high-resolution mass spectrometry (GC-HRMS) for the presence of spectra consistent with chlorine- and bromine-containing organic compounds. The “pattern.search” function was incorporated into a new function that allows the evaluation of clusters containing multiple analyte fragments, has multi-core support, and provides a simplified output listing compounds containing chlorine and/or bromine. The new function was able to process 46,000 spectral clusters in under 8 seconds and identified over 150 potential halogenated spectra. We next applied our function to a deidentified dataset from patients diagnosed with primary biliary cholangitis (PBC), primary sclerosing cholangitis (PSC), and healthy controls. Twenty-two spectra corresponded to potential halogenated compounds in the PSC and PBC dataset, including six that differed significantly in PBC patients and four that differed in PSC patients. We have developed an improved algorithm for detecting halogenated compounds in GC-HRMS data, providing a strategy for prioritizing exposures in the study of human disease.
Keywords: exposome, metabolome, computational metabolomics, high-resolution mass spectrometry, exposure, pollutants
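A hedged Python sketch of the screening idea follows (the actual work wraps the R pattern.search function): flag clusters whose M and M+2 peaks match the roughly 1.997 Da spacing and isotope-intensity ratios characteristic of chlorine and bromine. The tolerances and the toy spectrum are assumptions.

```python
# Hedged sketch of halogen screening: flag spectral peaks whose M+2 partner
# matches the Cl/Br isotope spacing and expected intensity ratio.
import numpy as np

DELTA_M2 = 1.9970          # approx. 37Cl-35Cl (and ~81Br-79Br) mass gap [Da]
RATIOS = {"Cl": 0.320, "Br": 0.973}   # expected M+2/M intensity ratios

def flag_halogenated(mz, intensity, mz_tol=0.01, ratio_tol=0.25):
    hits = []
    for i, m in enumerate(mz):
        j = np.argmin(np.abs(mz - (m + DELTA_M2)))     # nearest M+2 candidate
        if abs(mz[j] - (m + DELTA_M2)) > mz_tol:
            continue
        r = intensity[j] / intensity[i]
        for elem, expected in RATIOS.items():
            if abs(r - expected) / expected < ratio_tol:
                hits.append((m, elem))
    return hits

mz = np.array([263.93, 265.93, 301.00])       # toy cluster
inten = np.array([100.0, 33.0, 50.0])
print(flag_halogenated(mz, inten))            # -> [(263.93, 'Cl')]
```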
Procedia PDF Downloads 138
19638 A Data Driven Approach for the Degradation of a Lithium-Ion Battery Based on Accelerated Life Test
Authors: Alyaa M. Younes, Nermine Harraz, Mohammad H. Elwany
Abstract:
Lithium-ion batteries are currently used for many applications, including satellites, electric vehicles, and mobile electronics. Their ability to store a relatively large amount of energy in a limited space makes them most appropriate for critical applications. Evaluating the life of these batteries and their reliability becomes crucial to the systems they support. The reliability of Li-ion batteries has mainly been considered in terms of lifetime. However, another factor that can be considered critical in many applications, such as electric vehicles, is the cycle duration. The present work presents the results of an experimental investigation of the degradation behavior of a laptop Li-ion battery (type TKV2V) and the effect of the applied load on the battery cycle time. The reliability was evaluated using an accelerated life test. Least-squares linear regression with median rank estimation was used to estimate the Weibull distribution parameters needed for estimating the reliability functions. The probability density function, failure rate, and reliability function under each of the applied loads were evaluated and compared. An inverse power model is introduced that can predict cycle time at any given stress level.
Keywords: accelerated life test, inverse power law, lithium-ion battery, reliability evaluation, Weibull distribution
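A minimal sketch of the estimation step the abstract describes, least-squares regression on Weibull probability paper with Bernard's median-rank approximation, follows; the failure times are made-up placeholders.

```python
# Sketch of Weibull parameter estimation via least-squares regression with
# Bernard's median-rank approximation, as described in the abstract.
import numpy as np

def weibull_mrr(failure_times):
    t = np.sort(np.asarray(failure_times, dtype=float))
    n = len(t)
    i = np.arange(1, n + 1)
    F = (i - 0.3) / (n + 0.4)              # Bernard's median-rank estimate
    x = np.log(t)                          # linearize: ln(-ln(1-F)) = beta*ln t - beta*ln eta
    y = np.log(-np.log(1.0 - F))
    beta, intercept = np.polyfit(x, y, 1)
    eta = np.exp(-intercept / beta)
    return beta, eta                       # shape, scale

beta, eta = weibull_mrr([105, 140, 160, 190, 230, 280])  # made-up cycle times
print(f"shape={beta:.2f}, scale={eta:.1f}")

# Derived functions: R(t) = exp(-(t/eta)**beta),
# f(t) = (beta/eta)*(t/eta)**(beta-1)*R(t), hazard h(t) = f(t)/R(t).
```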
Procedia PDF Downloads 168
19637 Reliability Prediction of Tires Using Linear Mixed-Effects Model
Authors: Myung Hwan Na, Ho- Chun Song, EunHee Hong
Abstract:
Normal linear mixed-effects models are widely used to analyze repeated-measurement data. When heteroscedasticity and non-normality of the population distribution are detected at the same time, the normal linear mixed-effects model can give improper analysis results. To achieve more robust estimation, we use a heavy-tailed linear mixed-effects model, which gives more exact and reliable conclusions than the standard normal linear mixed-effects model.
Keywords: reliability, tires, field data, linear mixed-effects model
Procedia PDF Downloads 564
19636 Towards a Measurement-Based E-Government Portals Maturity Model
Authors: Abdoullah Fath-Allah, Laila Cheikhi, Rafa E. Al-Qutaish, Ali Idri
Abstract:
The emerging concept of e-government transforms the way citizens deal with their governments: citizens can execute the intended services online, anytime and anywhere. This results in great benefits for both governments (a reduced number of officers) and citizens (more flexibility and time savings). Therefore, building a maturity model to assess e-government portals becomes desirable to help in the improvement process of such portals. This paper aims at proposing an e-government maturity model based on measuring the presence of best practices. The main benefit of such a maturity model is to provide a way to rank an e-government portal based on the best practices it uses, and also to give a set of recommendations for moving to the next stage of the maturity model.
Keywords: best practices, e-government portal, maturity model, quality model
Procedia PDF Downloads 338
19635 Rationale of Eye Pupillary Diameter for the UV Protection for Sunglasses
Authors: Liliane Ventura, Mauro Masili
Abstract:
Ultraviolet (UV) protection is critical for sunglasses, and mydriasis, as well as miosis, are relevant parameters to consider. The literature reports that UV protection is critical because sunglasses can cause the opposite of the intended effect if the lenses do not provide adequate UV protection, owing to the greater dilation of the pupil behind dark lenses. However, the scientific literature does not properly quantify this effect to support the rationale; the reasoning can be misleading if it ignores not only the inherent absorption of UV by the sunglass lens materials but also the absorption of the anterior structures of the eye, i.e., the cornea and aqueous humor. Therefore, we quantify the dilation of the pupil as a function of the luminance of the surroundings and calculate the influx of solar UV through the pupil of the human eye for two situations: an individual wearing sunglasses and the eyes free of shade. A typical boundary condition for the calculation is an individual in an upright position wearing sunglasses, staring at the horizon as if the sun were at the zenith. The calculation was done for the latitude of the geographic center of the state of São Paulo (-22°04'11.8'' S) from sunrise to sunset. A model from the literature is used to determine the sky luminance. The initial approach is to obtain the pupil diameter as a function of luminance. Therefore, as a preliminary result, we calculate the pupil diameter as a function of the time of day, as the sun moves, for a particular day of the year. The working range for luminance is daylight (10⁻⁴ – 10⁵ cd/m²). We are able to show how the pupil adjusts to brightness changes (~2 - ~7.8 mm). At noon, with the sun higher, the direct incidence of light on the pupil is lower compared to mid-morning or mid-afternoon, when the sun strikes more directly into the eye; thus, the pupil is larger at midday. As expected, the two situations show opposite behaviors, since higher luminance implies a smaller pupil. With these results, we can progress in the short term to obtaining the transmittance spectra of sunglasses samples and quantifying how the light attenuation provided by the spectacles affects pupil diameter.
Keywords: sunglasses, UV protection, pupil diameter, solar irradiance, luminance
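The abstract does not state which luminance-to-pupil model it adopted; one classical candidate from the literature is the Moon and Spencer (1944) formula, sketched below, which reproduces roughly the ~2 - ~7.8 mm range quoted over the daylight luminance range.

```python
# Moon & Spencer (1944) pupil-diameter model, one classical literature choice
# (the paper does not state which model it adopted).
import numpy as np

def pupil_diameter_mm(L):
    """Pupil diameter [mm] for adaptation luminance L [cd/m^2]."""
    return 4.9 - 3.0 * np.tanh(0.4 * np.log10(L))

for L in (1e-4, 1.0, 1e5):   # working range of daylight luminances
    print(f"L={L:g} cd/m^2 -> {pupil_diameter_mm(L):.2f} mm")
```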
Procedia PDF Downloads 81
19634 High Fidelity Interactive Video Segmentation Using Tensor Decomposition, Boundary Loss, Convolutional Tessellations, and Context-Aware Skip Connections
Authors: Anthony D. Rhodes, Manan Goel
Abstract:
We provide a high-fidelity deep learning algorithm (HyperSeg) for interactive video segmentation tasks, using a dense convolutional network with context-aware skip connections and compressed, 'hypercolumn' image features combined with a convolutional tessellation procedure. In order to maintain high output fidelity, our model crucially processes and renders all image features in high resolution, without utilizing downsampling or pooling procedures. We maintain this consistently high fidelity efficiently in our model chiefly through two means: (1) we use a statistically principled tensor decomposition procedure to modulate the number of hypercolumn features, and (2) we render these features in their native resolution using a convolutional tessellation technique. For improved pixel-level segmentation results, we introduce a boundary loss function; for improved temporal coherence in video data, we include temporal image information in our model. Through experiments, we demonstrate the improved accuracy of our model against baseline models for interactive segmentation tasks using high-resolution video data. We also introduce a benchmark video segmentation dataset, the VFX Segmentation Dataset, which contains 27,046 high-resolution video frames, including green screen and various composited scenes with corresponding, hand-crafted, pixel-level segmentations. Our work improves state-of-the-art segmentation fidelity with high-resolution data and can be used across a broad range of application domains, including VFX pipelines and medical imaging disciplines.
Keywords: computer vision, object segmentation, interactive segmentation, model compression
Procedia PDF Downloads 120
19633 Reliability and Cost Focused Optimization Approach for a Communication Satellite Payload Redundancy Allocation Problem
Authors: Mehmet Nefes, Selman Demirel, Hasan H. Ertok, Cenk Sen
Abstract:
A typical reliability engineering problem regarding communication satellites has been considered: determining the redundancy allocation scheme of the power amplifiers within the payload transponder module, whose dominant function is to amplify the power levels of the signals received from the Earth, by maximizing reliability against mass, power, and other technical limitations. Adding each redundant power amplifier component increases not only reliability but also the hardware, testing, and launch cost of a satellite. This study investigates a multi-objective approach used to solve the Redundancy Allocation Problem (RAP) for a communication satellite payload transponder, focusing on design cost due to redundancy and on reliability factors. The main purpose is to find the optimum power amplifier redundancy configuration satisfying reliability and capacity thresholds simultaneously, instead of analyzing them separately or independently. A mathematical model and calculation approach are instituted, including objective function definitions, and then the problem is solved analytically with different input parameters in the MATLAB environment. Example results showed that the payload capacity and the failure rate of the power amplifiers have remarkable effects on the solution and also on the processing time.
Keywords: communication satellite payload, multi-objective optimization, redundancy allocation problem, reliability, transponder
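A hedged sketch of the kind of objective such an RAP evaluates, the reliability of a k-out-of-n arrangement of identical power amplifiers with a constant failure rate, follows; all numbers are placeholders, not the paper's mission data.

```python
# k-out-of-n reliability of identical power amplifiers with constant failure
# rate lam; placeholder numbers, not the paper's mission data.
from math import comb, exp

def k_out_of_n_reliability(k, n, lam, t):
    r = exp(-lam * t)                      # single-amplifier reliability
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

# Need 10 working amplifiers; compare 12-for-10 vs 14-for-10 redundancy
for n in (12, 14):
    print(n, k_out_of_n_reliability(10, n, lam=1e-6, t=15 * 8760))
```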
Procedia PDF Downloads 261
19632 Crashworthiness Optimization of an Automotive Front Bumper in Composite Material
Authors: S. Boria
Abstract:
In recent years, it has become possible to improve the crashworthiness of an automotive body structure from the beginning of the design stage, thanks to the development of specific optimization tools. It is well known that finite element codes can help the designer investigate the crash performance of structures under dynamic impact. Therefore, by coupling nonlinear mathematical programming procedures and statistical techniques with FE simulations, it is possible to optimize the design with a reduced number of analytical evaluations. In engineering applications, many optimization methods that are based on statistical techniques and utilize estimated models, called meta-models, are quickly spreading. A meta-model is an approximation of a detailed simulation model based on a dataset of inputs identified by the design of experiments (DOE); the number of simulations needed to build it depends on the number of variables. Among the various types of meta-modeling techniques, the Kriging method seems to be excellent in accuracy, robustness, and efficiency compared to the others when applied to crashworthiness optimization. Therefore, such a meta-model was used in this work in order to improve the structural optimization of a bumper for a racing car, made of composite material and subjected to frontal impact. The specific energy absorption represents the objective function to maximize, and the geometrical parameters subject to some design constraints are the design variables. The LS-DYNA code was interfaced with the LS-OPT tool in order to find the optimized solution through the use of a domain reduction strategy. With the use of the Kriging meta-model, the crashworthiness characteristics of the composite bumper were improved.
Keywords: composite material, crashworthiness, finite element analysis, optimization
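A minimal sketch of the Kriging meta-modeling loop follows, with a cheap analytic function standing in for an LS-DYNA crash run and scikit-learn's Gaussian process regressor standing in for LS-OPT's Kriging surrogate; everything here is illustrative.

```python
# Sketch of Kriging (Gaussian-process) meta-modeling over a DOE: the
# expensive crash simulation is mocked by an analytic function of two
# design variables, and the surrogate's predicted optimum is reported.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def crash_sim(x):                 # placeholder for an LS-DYNA run (SEA output)
    return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1]) + 0.5 * x[:, 0]

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (30, 2))    # DOE over two geometric design variables
y = crash_sim(X)

kriging = GaussianProcessRegressor(kernel=RBF(length_scale=0.2),
                                   normalize_y=True).fit(X, y)
cand = rng.uniform(0, 1, (5000, 2))
mu = kriging.predict(cand)
print("predicted optimum:", cand[np.argmax(mu)])   # maximize predicted SEA
```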
Procedia PDF Downloads 256
19631 Fractional-Order PI Controller Tuning Rules for Cascade Control System
Authors: Truong Nguyen Luan Vu, Le Hieu Giang, Le Linh
Abstract:
Fractional-order proportional integral (FOPI) controller tuning rules based on fractional calculus for the cascade control system are systematically proposed in this paper. Accordingly, the ideal controller is obtained by using the internal model control (IMC) approach for both the inner and outer loops, which gives the desired closed-loop responses. On the basis of fractional calculus, the analytical tuning rules of the FOPI controller for the inner loop can be established in the frequency domain, while the outer loop is tuned using any integer PI/PID controller tuning rules from the literature. A simulation study is considered for stable process models, and the results demonstrate the simplicity, flexibility, and effectiveness of the proposed method for the cascade control system compared with other methods.
Keywords: Bode’s ideal transfer function, fractional calculus, fractional-order proportional integral (FOPI) controller, cascade control system
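For reference, a generic FOPI controller has the form below (notation assumed; the paper derives its own IMC-based rules for the gains and the fractional order):

```latex
% Generic FOPI controller form (assumed notation; the paper's tuning rules
% determine K_p, T_i and the fractional order \lambda):
C(s) = K_p \left( 1 + \frac{1}{T_i\, s^{\lambda}} \right),
\qquad 0 < \lambda < 2
```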
Procedia PDF Downloads 377
19630 Viscoelastic Modeling of Hot Mix Asphalt (HMA) under Repeated Loading by Using Finite Element Method
Authors: S. A. Tabatabaei, S. Aarabi
Abstract:
Predicting the response and performance of hot mix asphalt (HMA) is a challenging task because HMA is subjected to complex loading and environmental conditions. The behavior of HMA is a function of the temperature and the loading, and it is time- and rate-dependent, directly affecting the design criteria of the mixture; the speed of the passing load determines the loading time and rate. Viscoelasticity describes the reaction of HMA to loading and environmental conditions such as temperature and moisture, and this behavior has a direct effect on design criteria such as tensile strain and vertical deflection. In this paper, a computational framework for viscoelasticity and its implementation in a 3D HMA model are introduced for use in the finite element method. The model was subjected to various repeated loading conditions at constant temperature. The viscoelastic response of HMA is investigated under loading conditions corresponding to different vehicle speeds, the sensitivity of the behavior to the range of speeds is examined, and the results are compared to an HMA that is assumed to behave elastically, as in conventional design methods. The results show the importance of the loading time pulse, the unloading time, and the various speeds on the design criteria, as well as the importance of the fading memory of the material in storing strain and stress due to repeated loading. The model was simulated with the ABAQUS finite element package.
Keywords: viscoelasticity, finite element method, repeated loading, HMA
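The abstract does not give its constitutive form; a standard way FE codes such as ABAQUS represent this kind of viscoelasticity is a Prony-series (generalized Maxwell) relaxation modulus, written here with assumed notation:

```latex
% Prony-series relaxation modulus, the standard viscoelastic representation
% in FE codes such as ABAQUS (assumed form, not necessarily the paper's):
E(t) = E_{\infty} + \sum_{i=1}^{N} E_i \, e^{-t/\rho_i}
```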
Procedia PDF Downloads 398
19629 The Secrecy Capacity of the Semi-Deterministic Wiretap Channel with Three State Information
Authors: Mustafa El-Halabi
Abstract:
A general model of the wiretap channel with states is considered, where the legitimate receiver's and the wiretapper's observations depend on three states S1, S2 and S3. State S1 is non-causally known to the encoder, S2 is known to the receiver, and S3 remains unknown. A secure coding scheme based on structured binning is proposed, and it is shown to achieve the secrecy capacity when the signal at the legitimate receiver is a deterministic function of the input.
Keywords: physical layer security, interference, side information, secrecy capacity
Procedia PDF Downloads 389
19628 CFD Simulation of a Large Scale Unconfined Hydrogen Deflagration
Authors: I. C. Tolias, A. G. Venetsanos, N. Markatos
Abstract:
In the present work, CFD simulations of a large-scale open deflagration experiment are performed, in which a stoichiometric hydrogen-air mixture occupies a 20 m hemisphere. Two combustion models are compared and evaluated against the experiment: the Eddy Dissipation Model and a multi-physics combustion model based on Yakhot’s equation for the turbulent flame speed. The values of the models’ critical parameters are investigated. The effect of the turbulence model is also examined: the k-ε model and an LES approach were tested.
Keywords: CFD, deflagration, hydrogen, combustion model
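For reference, Yakhot's transcendental relation for the turbulent flame speed, on which the multi-physics model is based, reads:

```latex
% Yakhot's relation for the turbulent flame speed S_T in terms of the
% laminar speed S_L and turbulence intensity u' (solved iteratively):
\frac{S_T}{S_L} = \exp\!\left( \frac{u'^2}{S_T^2} \right)
```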
Procedia PDF Downloads 502
19627 An Algorithm to Find Fractional Edge Domination Number and Upper Fractional Edge Domination Number of an Intuitionistic Fuzzy Graph
Authors: Karunambigai Mevani Govindasamy, Sathishkumar Ayyappan
Abstract:
In this paper, we formulate an algorithm to find the dominating function parameters of intuitionistic fuzzy graphs (IFGs). The methodology we adopted is to convert a physical problem into an IFG, which is then transformed into an intuitionistic fuzzy matrix. Using the Linear Program Solver software (LiPS), we found the defined parameters for the given IFG. We obtained these parameters for path and cycle IFGs; the study can be extended to other varieties of IFGs. In particular, we give the definitions of the edge dominating function, the minimal edge dominating function, the fractional edge domination number (γ_if^') and the upper fractional edge domination number (Γ_if^') of an intuitionistic fuzzy graph. We also formulated an algorithm suitable for LiPS to find the fractional edge domination number and the upper fractional edge domination number of an IFG.
Keywords: fractional edge domination number, intuitionistic fuzzy cycle, intuitionistic fuzzy graph, intuitionistic fuzzy path
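A sketch of the underlying (crisp) linear program, which any LP solver can handle, follows; the paper's LiPS formulation additionally carries intuitionistic fuzzy membership and non-membership values, which this toy example omits.

```python
# Crisp LP behind fractional edge domination: minimize sum f(e) subject to
# sum of f over the closed neighborhood N[e] >= 1 and 0 <= f(e) <= 1.
# (The IFG version weights these constraints with membership values.)
import numpy as np
from scipy.optimize import linprog

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]          # a 4-cycle
m = len(edges)
A = np.zeros((m, m))
for i, (a, b) in enumerate(edges):                # N[e_i]: e_i plus adjacent edges
    for j, (c, d) in enumerate(edges):
        if {a, b} & {c, d}:
            A[i, j] = 1.0

res = linprog(c=np.ones(m), A_ub=-A, b_ub=-np.ones(m), bounds=[(0, 1)] * m)
print(res.x, res.fun)   # fractional edge domination number of the cycle
```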
Procedia PDF Downloads 176
19626 A Framework for Consumer Selection on Travel Destinations
Authors: J. Rhodes, V. Cheng, P. Lok
Abstract:
The aim of this study is to develop a parsimonious model that explains the effect of different stimuli on a tourist's intention to visit a new destination. The model uses destination trust and interest as the mediating variables and was tested using two different types of stimuli; both studies empirically supported the proposed model. Furthermore, the first study revealed that advertising has a stronger effect than positive online reviews, and the second study found that the peripheral route of the elaboration likelihood model has stronger influence than the central route in this context.
Keywords: advertising, electronic word-of-mouth, elaboration likelihood model, intention to visit, trust
Procedia PDF Downloads 458
19625 Study of the Polymer Elastic Behavior in the Displacement Oil Drops at Pore Scale
Authors: Luis Prada, Jose Gomez, Arlex Chaves, Julio Pedraza
Abstract:
Polymeric liquids have been used in the oil industry, especially in enhanced oil recovery (EOR). From the rheological point of view, polymers have the particularity of being viscoelastic liquids, and one of the most common and useful models to describe that behavior is the Upper Convected Maxwell (UCM) model. The main characteristic of the polymers used in the EOR process is the increase in viscosity, which pushes the oil out of the reservoir; the elasticity could contribute to dragging the oil that stays in the reservoir. Studying the elastic effect on an oil drop at the pore scale offers an explanation of whether the added elastic force could mobilize the oil. This research explores whether the contraction and expansion of the polymer at the pore scale may increase the elastic behavior of this kind of fluid. For that reason, this work simplified the pore geometry and built two simple geometries with micrometer lengths. Using source terms through a user-defined function, this work introduces the UCM model into the ANSYS Fluent simulator with the purpose of evaluating the elastic effect of the polymer in a contraction-and-expansion geometry. Also, using the Eulerian multiphase model, this research considers the possibility that the extra elastic force will have a deformation effect on the oil; for that reason, an oil drop is placed on the upper wall of the geometry. Finally, all the simulations show that, under pore-scale conditions, extra vortices exist with the UCM model, but it is not possible to deform the oil completely and push it out of the restrictions; the research also identifies the conditions for oil displacement.
Keywords: ANSYS fluent, interfacial fluids mechanics, polymers, pore scale, viscoelasticity
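For reference, the UCM constitutive equation introduced through the UDF source terms can be written as below (one common sign/transpose convention for the velocity gradient is assumed):

```latex
% Upper Convected Maxwell model for the extra stress tensor \boldsymbol{\tau}
% (relaxation time \lambda, polymer viscosity \eta_p); convention assumed.
\boldsymbol{\tau} + \lambda\, \overset{\nabla}{\boldsymbol{\tau}}
  = \eta_p \left( \nabla\mathbf{u} + (\nabla\mathbf{u})^{T} \right),
\qquad
\overset{\nabla}{\boldsymbol{\tau}}
  = \frac{\partial \boldsymbol{\tau}}{\partial t}
  + \mathbf{u}\cdot\nabla\boldsymbol{\tau}
  - (\nabla\mathbf{u})^{T}\cdot\boldsymbol{\tau}
  - \boldsymbol{\tau}\cdot\nabla\mathbf{u}
```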
Procedia PDF Downloads 132
19624 A Combined AHP-GP Model for Selecting Knowledge Management Tool
Authors: Ahmad Sarfaraz, Raiyad Herwies
Abstract:
In this paper, a multi-criteria decision-making analysis is used to help an organization select the KM tool that best fits and serves its needs. The AHP model is used, based on a previous study, to highlight and identify the main criteria and sub-criteria incorporated in the selection process. Different KM tool alternatives with different criteria are compared and weighted accurately to be incorporated in the GP model. The main goal is to combine the GP model with the AHP model to ensure that selecting the KM tool considers the resource constraints. Two important issues are discussed in this paper: how different factors can be taken into consideration in forming the AHP model, and how to incorporate the AHP results into the GP model for better results.
Keywords: knowledge management, analytical hierarchy process, goal programming, multi-criteria decision making
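A minimal sketch of the AHP step, deriving priority weights from a pairwise-comparison matrix via its principal eigenvector together with Saaty's consistency check, follows; the matrix entries are illustrative, not the paper's.

```python
# Sketch of the AHP step: derive criteria weights from a pairwise-comparison
# matrix via its principal eigenvector (the matrix below is illustrative).
import numpy as np

A = np.array([[1.0, 3.0, 5.0],       # e.g. cost vs usability vs scalability
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # normalized priority weights

CI = (eigvals.real[k] - len(A)) / (len(A) - 1)   # consistency index
print(w, CI / 0.58)                   # consistency ratio; RI=0.58 for n=3
```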
Procedia PDF Downloads 385
19623 Modeling of Strong Motion Generation Areas of the 2011 Tohoku, Japan Earthquake Using Modified Semi-Empirical Technique Incorporating Frequency Dependent Radiation Pattern Model
Authors: Sandeep, A. Joshi, Kamal, Piu Dhibar, Parveen Kumar
Abstract:
In the present work, strong ground motion has been simulated using a modified semi-empirical technique (MSET) with a frequency-dependent radiation pattern model. Joshi et al. (2014) modified the semi-empirical technique to incorporate the modeling of strong motion generation areas (SMGAs). A frequency-dependent radiation pattern model is applied to simulate high-frequency ground motion more precisely. The identified SMGAs (Kurahashi and Irikura 2012) of the 2011 Tohoku earthquake (Mw 9.0) were modeled using this modified technique. Records are simulated for both frequency-dependent and constant radiation pattern functions. The simulated records for both cases are compared with observed records in terms of peak ground acceleration and pseudo-acceleration response spectra at different stations. Comparison of the simulated and observed records in terms of root mean square error suggests that the method is capable of simulating records that match the observations over a wide frequency range for this earthquake and bear a realistic appearance in terms of shape and strong motion parameters. The results confirm the efficacy and suitability, for the developed modified technique, of the rupture model defined by five SMGAs.
Keywords: strong ground motion, semi-empirical, strong motion generation area, frequency dependent radiation pattern, 2011 Tohoku Earthquake
Procedia PDF Downloads 537
19622 Long Short-Term Memory Neural Networks for Human Driving Behavior Modelling
Authors: Lu Zhao, Nadir Farhi, Yeltsin Valero, Zoi Christoforou, Nadia Haddadou
Abstract:
In this paper, a long short-term memory (LSTM) neural network model is proposed to replicate car-following and lane-changing behaviors simultaneously in road networks. By combining two kinds of LSTM layers and three input designs of the neural network, six variants of the LSTM model were created. These models were trained and tested on the NGSIM 101 dataset, and the results were evaluated in terms of longitudinal speed and lateral position, respectively. We then compared the LSTM model with a classical car-following model (the intelligent driver model (IDM)) for the speed-decision part, and with a model using classical neural networks. The comparison shows that the LSTM model achieves higher accuracy than the physical IDM model in terms of car-following behavior and performs better with regard to both car-following and lane-changing behavior than the classical neural network model.
Keywords: traffic modeling, neural networks, LSTM, car-following, lane-change
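A minimal PyTorch sketch of one such LSTM variant follows: a window of trajectory features in, next longitudinal speed and lateral position out. The feature count and layer sizes are assumptions, not the paper's configuration.

```python
# Minimal sketch of an LSTM driving-behavior model: a history of per-frame
# features in, next longitudinal speed and lateral position out.
import torch
import torch.nn as nn

class DrivingLSTM(nn.Module):
    def __init__(self, n_features=6, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 2)      # [next speed, next lateral pos]

    def forward(self, x):                     # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])          # predict from last time step

model = DrivingLSTM()
x = torch.randn(32, 50, 6)                    # 32 trajectories, 5 s @ 10 Hz
print(model(x).shape)                         # torch.Size([32, 2])
```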
Procedia PDF Downloads 261
19621 Research and Design of Functional Mixed Community: A Model Based on the Construction of New Districts in China
Authors: Wu Chao
Abstract:
At the city planning level, the urban design of a new district in China differs from that of existing cities such as Beijing, Shanghai, and Guangzhou, whose urban problems are the same as those of many big cities around the world. The goal of the new district construction plan is to enable people to live comfortably, to improve the well-being of residents, and to create a way of life different from that of other urban communities. To avoid the emergence of a super community, the idea of "decentralization" is taken as the overall planning idea, and the function and form of each community are set up with a homogeneous allocation of resources so that the community can grow naturally. Similar to the growth of vines in nature, the community groups are independent yet connected through roads, with clear community boundaries that limit their unlimited expansion. Taking a community of 20,000 people as a case, the community mixes living, production, office, entertainment, and other functions. The development of the Internet makes it possible to create more space for public use and to allocate resources in real time based on usage data; this kind of shared space forms the main part of the activity space in the community. At the same time, the transformation of spatial functions can be guided by usage feedback on all kinds of existing spaces, so that the use of a space changes with the changing data. The residential unit is taken as the basic building mass, with the lower three to four floors of each building serving as the main flexible space, accommodating functions such as entertainment, services, and offices; the upper floors are living space, with a small amount of indoor and outdoor activity space that is also used as shared space. The transformable space of the lower levels is evenly distributed and, combined with the walking network connecting the community, forms a service and entertainment network across the whole community that can be reached from most community spaces. With the basic residential unit as a replicable module, the design of the other residential units follows the ideas of decentralization and the vine community, and the various units are combined reasonably; a small number of office buildings are added to meet special office needs. The new functionally mixed community can resolve many problems of the present city in future construction and, at the same time, keep its vitality through the adjustment function of the Internet.
Keywords: decentralization, mixed functional community, shared space, spatial usage data
Procedia PDF Downloads 123
19620 Distances over Incomplete Diabetes and Breast Cancer Data Based on Bhattacharyya Distance
Authors: Loai AbdAllah, Mahmoud Kaiyal
Abstract:
Missing values in real-world datasets are a common problem. Many algorithms have been developed to deal with this problem, most of which replace the missing values with a fixed value computed from the observed values. In our work, we used a distance function based on the Bhattacharyya distance, which measures the similarity of two probability distributions, to measure the distance between objects with missing values. The proposed distance distinguishes between known and unknown values: the distance between two known values is the Mahalanobis distance, while when one of them is missing, the distance is computed based on the distribution of the known values for the coordinate that contains the missing value. This method was integrated with Wikaya, a digital health company developing a platform that helps to improve the prevention of chronic diseases such as diabetes and cancer. For Wikaya's recommendation system to work, distances between users need to be measured; since there are missing values in the collected data, a distance function for incomplete user profiles is needed. To evaluate the accuracy of the proposed distance function in reflecting the actual similarity between objects when some of them contain missing values, we integrated it within the framework of the k-nearest-neighbors (kNN) classifier, since its computation is based only on the similarity between objects. To validate this, we ran the algorithm on the diabetes and breast cancer datasets, standard benchmark datasets from the UCI repository. Our experiments show that the kNN classifier using our proposed distance function outperforms kNN using other existing methods.
Keywords: missing values, incomplete data, distance, incomplete diabetes data
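A hedged, coordinate-wise sketch of this idea follows; treating the known value as a narrow Gaussian and the missing coordinate as the attribute's empirical distribution is one plausible reading of the method, not the authors' exact formulation.

```python
# Coordinate-wise distance sketch: known-known pairs use a (diagonal)
# Mahalanobis term; when one value is missing, the term falls back to a
# Bhattacharyya distance between the known value (a narrow Gaussian) and
# the attribute's empirical distribution. One plausible reading only.
import numpy as np

def incomplete_distance(u, v, mu, var, eps=1e-3):
    d = 0.0
    for j in range(len(u)):
        if not np.isnan(u[j]) and not np.isnan(v[j]):
            d += (u[j] - v[j]) ** 2 / var[j]          # diagonal Mahalanobis
        else:
            x = u[j] if np.isnan(v[j]) else v[j]
            if np.isnan(x):                            # both missing: skip
                continue
            s1, s2 = eps, var[j]                       # point vs attribute dist.
            d += 0.25 * (x - mu[j]) ** 2 / (s1 + s2) \
                 + 0.25 * np.log(0.25 * (s1/s2 + s2/s1 + 2))  # Bhattacharyya
    return np.sqrt(d)

X = np.array([[1.0, 2.0, np.nan], [1.5, np.nan, 3.0]])
mu, var = np.array([1.2, 2.1, 2.8]), np.array([0.5, 0.8, 0.9])
print(incomplete_distance(X[0], X[1], mu, var))
```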
Procedia PDF Downloads 225
19619 Designing Function Knitted and Woven Upholstery Textile With SCOBY Film
Authors: Manar Y. Abd El-Aziz, Alyaa E. Morgham, Amira A. El-Fallal, Heba Tolla E. Abo El Naga
Abstract:
Different textile materials are usually used in upholstery. However, upholstered parts may become unhealthy when dust accumulates and bacteria grow on the surface, which negatively affects the user's health. Leather and artificial leather have also been used in upholstery, but leather has a high cost, and artificial leather poses a potential chemical risk to users. Researchers have advanced a vegan leather made from bacterial cellulose, produced by a symbiotic culture of bacteria and yeast (SCOBY); the SCOBY forms a gelatinous cellulose biofilm floating at the air-liquid interface of the culture container. This leather, however, still needs enhancement of its mechanical properties. This study aimed to prepare SCOBY, produce bamboo rib-knitted fabrics with two different stitch densities and a cotton woven fabric, and then laminate these fabrics with the prepared SCOBY film to enhance the mechanical properties of the SCOBY leather while adding an antimicrobial function to the prepared fabrics. Laboratory tests were conducted on the produced samples, including tests for functional properties (antimicrobial activity, thermal conductivity, and light transparency), physical properties (thickness and mass per unit area), and mechanical properties (elongation, tensile strength, Young's modulus, and peel force). The results showed that the type of fabric significantly affected the SCOBY properties. According to the test results, the bamboo knitted fabric with the higher stitch density laminated with SCOBY was chosen, for its tensile strength and elongation, as the upholstery of a bed model with antimicrobial properties and comfort in the headrest design. Also, a single layer of SCOBY was chosen, for its light transparency and lower thermal conductivity, for the creation of a lighting unit built into the bed headboard.
Keywords: anti-microbial, bamboo, rib, SCOBY, upholstery
Procedia PDF Downloads 64
19618 AgriFood Model in Ankara Regional Innovation Strategy
Authors: Coskun Serefoglu
Abstract:
The study aims to analyse how a traditional sector such as agri-food can be mobilized through regional innovation strategies. A principal component analysis, as well as qualitative information such as in-depth interviews, focus groups, and surveys, was employed to find the priority sectors. An agri-food model was developed that includes both a linear model and an interactive model. The model consists of two main components, one of which is technological integration and the other agricultural extension, which is based on the land-grant university approach of the U.S., not a common practice in Turkey.
Keywords: regional innovation strategy, interactive model, agri-food sector, local development, planning, regional development
Procedia PDF Downloads 149
19617 A Multi-Objective Decision Making Model for Biodiversity Conservation and Planning: Exploring the Concept of Interdependency
Authors: M. Mohan, J. P. Roise, G. P. Catts
Abstract:
Despite living in an era where conservation zones are de facto the central element of any sustainable wildlife management strategy, we still find ourselves grappling with several Pareto-optimal situations regarding resource allocation and area distribution for them. In this paper, a multi-objective decision making (MODM) model is presented to answer the question of whether we can establish mutual relationships between these conflicting objectives. For our study, we considered a Red-cockaded woodpecker (Picoides borealis) habitat conservation scenario in the coastal plain of North Carolina, USA. The Red-cockaded woodpecker (RCW) is a non-migratory, territorial bird that excavates cavities in living pine trees for roosting and nesting. RCW groups nest in an aggregation of cavity trees called a 'cluster', and for our model we use the number of clusters to be established as a measure of the size of the conservation zone required. The case study is formulated as a linear programming problem, and the objective function optimizes the number of Red-cockaded woodpecker clusters, the carbon retention rate, biofuel, public safety, and the Net Present Value (NPV) of the forest. We studied the variation of the individual objectives with respect to the amount of area available and plotted a two-dimensional dynamic graph after establishing the interrelations between the objectives. We further explore the concept of interdependency by integrating the MODM model with GIS and deriving a raster file representing the carbon distribution from the existing forest dataset. The model results demonstrate the applicability of interdependency from both linear and spatial perspectives and suggest that this approach holds immense potential for enhancing environmental investment decision making in the future.
Keywords: conservation, interdependency, multi-objective decision making, red-cockaded woodpecker
Procedia PDF Downloads 337
19616 Deep Reinforcement Learning Model Using Parameterised Quantum Circuits
Authors: Lokes Parvatha Kumaran S., Sakthi Jay Mahenthar C., Sathyaprakash P., Jayakumar V., Shobanadevi A.
Abstract:
With the evolution of technology, the need to solve complex computational problems in machine learning and deep learning has shot up, but even the most powerful classical supercomputers find it difficult to execute these tasks. With the recent development of quantum computing, researchers and tech giants strive for new quantum circuits for machine learning tasks, as recent work on Quantum Machine Learning (QML) promises lower memory consumption and fewer model parameters. However, it is difficult to simulate classical deep learning models on existing quantum computing platforms due to the inflexibility of deep quantum circuits. As a consequence, it is essential to design viable quantum algorithms for QML on noisy intermediate-scale quantum (NISQ) devices. The proposed work aims to explore Variational Quantum Circuits (VQC) for deep reinforcement learning by remodeling the experience replay and target network into a VQC representation. In addition, to reduce the number of model parameters, quantum information encoding schemes are used to achieve better results than classical neural networks. VQCs are employed to approximate the deep Q-value function for decision-making and policy-selection reinforcement learning, with experience replay and a target network.
Keywords: quantum computing, quantum machine learning, variational quantum circuit, deep reinforcement learning, quantum information encoding scheme
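A minimal PennyLane sketch of a variational quantum circuit usable as a Q-value approximator follows; the angle-encoding and entangler ansatz are assumed choices, not the paper's exact circuit.

```python
# Minimal variational quantum circuit as a Q-value function approximator;
# the encoding/ansatz choices are assumptions, not the paper's circuit.
import pennylane as qml
from pennylane import numpy as np

n_qubits, n_layers = 4, 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def q_values(state, weights):
    qml.AngleEmbedding(state, wires=range(n_qubits))          # encode observation
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))  # trainable ansatz
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]  # one per action

weights = np.random.uniform(0, np.pi,
                            qml.BasicEntanglerLayers.shape(n_layers, n_qubits),
                            requires_grad=True)
print(q_values(np.array([0.1, 0.2, 0.3, 0.4]), weights))
```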
Procedia PDF Downloads 134