Search results for: lumped parameter model

17077 An Agent-Based Modeling and Simulation of Human Muscle

Authors: Sina Saadati, Mohammadreza Razzazi

Abstract:

In this article, we present an agent-based model of human muscle. A suitable muscle model is necessary for the analysis of human movement. It can be used by clinical researchers who study the influence of movement disorders such as Parkinson's disease. It is also useful in the development of prostheses that receive electromyography signals and generate force in response. Since we have focused on computational efficiency in this research, the model performs its calculations very quickly; for prostheses, it can therefore be regarded as a charge-efficient method. We first describe the agent-based model and then use it to simulate the human gait cycle. The method can also be applied in reverse to analyse gait in movement disorders.

Keywords: agent-based modeling and simulation, human muscle, gait cycle, motion sickness

Procedia PDF Downloads 114
17076 Urban Logistics Dynamics: A User-Centric Approach to Traffic Modelling and Kinetic Parameter Analysis

Authors: Emilienne Lardy, Eric Ballot, Mariam Lafkihi

Abstract:

Efficient urban logistics requires a comprehensive understanding of traffic dynamics, particularly the kinetic parameters influencing energy consumption and trip duration estimates. While real-time traffic information is increasingly accessible, current high-precision forecasting services embedded in route planning often function as opaque 'black boxes' for users. These services, typically relying on AI-processed counting data, fall short in accommodating the open design parameters essential for management studies, notably within Supply Chain Management. This work revisits the modelling of traffic conditions in the context of city logistics, emphasising its significance from the user's point of view, with two focuses. Firstly, the focus is not on the vehicle flow but on the vehicles themselves and the impact of the traffic conditions on their driving behaviour. This means opening the range of studied indicators beyond vehicle speed in order to describe the kinetic and dynamic aspects of driving behaviour more fully. To achieve this, we leverage the Art.Kinema parameters, which are designed to characterise driving cycles. Secondly, this study examines how the driving context (i.e., factors exogenous to the traffic flow) determines the observed driving behaviour. Specifically, we explore how accurately the kinetic behaviour of a vehicle can be predicted from a limited set of exogenous factors, such as time, day, road type, orientation, slope, and weather conditions. To answer this question, statistical analysis was conducted on real-world driving data, which includes high-frequency measurements of vehicle speed. A factor analysis and a generalised linear model were established to link the kinetic parameters with independent categorical contextual variables. The results include an assessment of the goodness of fit and the robustness of the models, as well as an overview of the models' outputs.
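
A minimal sketch of the analysis pipeline described above, assuming a pandas DataFrame of trips with per-trip kinetic indicators and categorical context columns; the column names and the choice of a Gaussian GLM are illustrative assumptions, not the authors' exact specification.

```python
# Sketch: factor analysis on kinetic indicators, then a GLM linking the
# leading factor to categorical driving-context variables.
# Column names and model family are illustrative assumptions.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

kinetic_cols = ["mean_speed", "pos_accel_share", "stop_share", "rpa"]  # assumed names

def fit_context_model(trips: pd.DataFrame):
    # Reduce the correlated kinetic indicators to a few latent factors.
    X = StandardScaler().fit_transform(trips[kinetic_cols])
    fa = FactorAnalysis(n_components=2, random_state=0)
    factors = fa.fit_transform(X)
    trips = trips.assign(factor1=factors[:, 0])

    # Explain the leading factor with exogenous categorical context variables.
    model = smf.glm(
        "factor1 ~ C(daytype) + C(road_type) + C(weather) + C(slope_class)",
        data=trips,
        family=sm.families.Gaussian(),
    ).fit()
    return fa, model

# usage: fa, glm_res = fit_context_model(trips); print(glm_res.summary())
```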

Keywords: factor analysis, generalised linear model, real world driving data, traffic congestion, urban logistics, vehicle kinematics

Procedia PDF Downloads 67
17075 Thickness Effect on Concrete Fracture Toughness K1c

Authors: Benzerara Mohammed, Redjel Bachir, Kebaili Bachir

Abstract:

Cracking of concrete has become an increasingly critical problem with the development of the complex structures that accompany technological progress. Advances in the understanding of the fracture process now allow better prevention of fracture risk. The brittle fracture resistance of a quasi-brittle material such as concrete, called toughness, is measured by the critical value of the stress intensity factor K1C at which a crack propagates; it is an intrinsic property of the material. Many studies reported in the literature on concrete were carried out on specimens that are in fact inadequate with respect to the intrinsic characteristic to be identified. Starting from this observation, and in order to compare the evolution of the toughness parameter K1C, ordinary concrete specimens of three different prismatic geometries, (10×10×84) cm³, (5×20×120) cm³ and (12×20×120) cm³, containing side notches of various depths simulating cracks, were prepared. The notches were produced using triangular pyramidal sheet-metal plates, coated and placed at the centre of the specimens during casting, then withdrawn to leave the trace of a crack. The tests were carried out in three-point bending in mode I fracture, using fracture mechanics techniques. The toughness parameter K1C measured on the three specimen geometries gives almost the same results. They are acceptable and fall within the range of results reported by various researchers (the toughness of ordinary concrete is around 1 MPa·√m). These results also point to an economy at the level of the (5×20×120) cm³ specimen geometry; plate specimens could therefore be used in later work to characterise the toughness of this complex, surprising but indispensable material that is concrete.
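
As a companion to the toughness values quoted above, the sketch below evaluates K1C for a single-edge-notched beam in three-point bending using the standard ASTM E399-type geometry function; the load, span and specimen dimensions are placeholder values, not the data from this study.

```python
# Sketch: K1c of a single-edge-notched beam (SENB) in three-point bending,
# using the standard ASTM E399-type geometry function f(a/W).
# Input values below are placeholders, not the specimens of this study.
import math

def f_geometry(x: float) -> float:
    """ASTM E399 geometry function for SENB, x = a/W."""
    num = 3.0 * math.sqrt(x) * (1.99 - x * (1 - x) * (2.15 - 3.93 * x + 2.7 * x**2))
    den = 2.0 * (1 + 2 * x) * (1 - x) ** 1.5
    return num / den

def k1c_senb(P, S, B, W, a):
    """P: load at fracture [N], S: span [m], B: thickness [m],
    W: depth [m], a: notch depth [m]. Returns K1c in Pa*sqrt(m)."""
    return (P * S / (B * W ** 1.5)) * f_geometry(a / W)

# Example (placeholder numbers): 10x10x84 cm beam, 3 kN load, 20 mm notch
K = k1c_senb(P=3000.0, S=0.80, B=0.10, W=0.10, a=0.02)
print(f"K1c ≈ {K / 1e6:.2f} MPa·√m")
```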

Keywords: elementary representative volume, concrete, fissure, toughness

Procedia PDF Downloads 223
17074 Reconsidering Taylor’s Law with Chaotic Population Dynamical Systems

Authors: Yuzuru Mitsui, Takashi Ikegami

Abstract:

The exponents of Taylor’s law in deterministic chaotic systems are computed, and their meanings are intensively discussed. Taylor’s law is the scaling relationship between the mean and variance (in both space and time) of population abundance, and this law is known to hold in a variety of ecological time series. The exponents found in the temporal Taylor’s law are different from those of the spatial Taylor’s law. The temporal Taylor’s law is calculated on time series from the same locations (or the same initial states) at different temporal phases, whereas for the spatial Taylor’s law, the mean and variance are calculated at the same temporal phase, sampled from different places. Most previous studies were done with stochastic models, but we computed the temporal and spatial Taylor’s law in deterministic systems. The temporal Taylor’s law was evaluated using the same initial state, and the spatial Taylor’s law was evaluated using the ensemble average and variance. There were two main discoveries from this work. First, it is often stated that deterministic systems tend to have the value two for Taylor’s exponent; however, most of the calculated exponents here were not two. Second, we investigated the relationships between chaotic features measured by the Lyapunov exponent, the correlation dimension, and other indexes, and Taylor’s exponents. No strong correlations were found; however, some relationship does appear within the same model at different parameter values, and we discuss the meaning of these results at the end of this paper.
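
A minimal sketch of how a temporal Taylor's-law exponent can be estimated for a deterministic chaotic map: logistic-map time series are generated, the time-mean and time-variance of each series are computed, and the exponent is the slope of log(variance) against log(mean). Varying the map parameter to obtain a spread of mean-variance pairs, and the series length used, are illustrative choices, not the systems analysed in the paper.

```python
# Sketch: temporal Taylor's law exponent b (variance ≈ a * mean^b), estimated
# from logistic-map time series generated at different parameter values r.
import numpy as np

def logistic_series(r, x0=0.4, n=5000, burn_in=500):
    x, out = x0, np.empty(n)
    for i in range(n + burn_in):
        x = r * x * (1.0 - x)
        if i >= burn_in:
            out[i - burn_in] = x
    return out

r_values = np.linspace(3.6, 4.0, 80)          # mostly chaotic regime
series = [logistic_series(r) for r in r_values]
means = np.array([s.mean() for s in series])
variances = np.array([s.var() for s in series])

# Slope of log(variance) vs log(mean) is the temporal Taylor exponent.
b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
print(f"temporal Taylor exponent b ≈ {b:.2f}")
```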

Keywords: chaos, density effect, population dynamics, Taylor’s law

Procedia PDF Downloads 174
17073 Time-Parameter-Based Detection of Catastrophic Faults in Analog Circuits

Authors: Arabi Abderrazak, Bourouba Nacerdine, Ayad Mouloud, Belaout Abdeslam

Abstract:

In this paper, a new test technique for analog circuits using time-mode simulation is proposed for the detection of single catastrophic faults. This test process is performed to overcome the problem of catastrophic faults escaping detection in the DC-mode test applied to the inverter amplifier in previous research work. The circuit under test is a second-order low-pass filter built around this type of amplifier but performing a function different from that of the previous test. The test approach is based on two key elements. The first is a single square pulse selected as the input test vector to stimulate the fault effect in the circuit output response. The second is the conversion of the filter response into a sequence of square pulses by an analog comparator; this conversion is achieved through a fixed reference threshold voltage of the comparison circuit. The measurement of the durations of the first three pulses of the response serves both as the fault-detection parameter and as a fault signature that helps to fully establish an analog-circuit fault diagnosis. The results obtained so far are very promising, since the approach has lifted the fault coverage ratio in both modes to over 90% and has revealed the harmful side of faults that had been masked in the DC-mode test.
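
The sketch below illustrates the signature-extraction step described above: a simulated filter response is compared against a fixed threshold and the durations of the first three resulting pulses are measured. The waveform, threshold level and sampling rate are illustrative assumptions.

```python
# Sketch: convert a filter response into a pulse sequence with a fixed
# comparator threshold and measure the first three pulse durations.
# Waveform, threshold and sampling rate are illustrative assumptions.
import numpy as np

fs = 1e6                                  # sampling frequency [Hz]
t = np.arange(0, 5e-3, 1 / fs)
# stand-in for the filter's ringing response to a square test pulse
response = np.exp(-t / 1.5e-3) * np.sin(2 * np.pi * 2e3 * t)

v_ref = 0.05                              # comparator threshold [V]
comp_out = (response > v_ref).astype(int) # ideal comparator output

edges = np.diff(comp_out)
rising = np.flatnonzero(edges == 1) + 1
falling = np.flatnonzero(edges == -1) + 1

# durations (in seconds) of the first three high pulses
durations = (falling[:3] - rising[:3]) / fs
print("first three pulse durations [µs]:", durations * 1e6)
```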

Keywords: analog circuits, analog faults diagnosis, catastrophic faults, fault detection

Procedia PDF Downloads 442
17072 Effect of Monotonically Decreasing Parameters on Margin Softmax for Deep Face Recognition

Authors: Umair Rashid

Abstract:

Softmax loss is normally used as the supervision signal in face recognition (FR) systems, and it boosts the separability of features. In the last two years, a number of techniques have been proposed that reformulate the original softmax loss to enhance the discriminating power of deep convolutional neural networks (DCNNs) for FR. To learn angularly discriminative features, cosine-margin-based softmax has to be adjusted with a monotonically decreasing angular function, which is the main challenge for angular-based softmax. On that issue, we propose a monotonically decreasing element for cosine-margin-based softmax and discuss the effect of different monotonically decreasing parameters on angular-margin softmax for FR. We train the model on the publicly available CASIA-WebFace dataset with the proposed monotonically decreasing parameters for the cosine function, and tests on YouTube Faces (YTF), Labeled Faces in the Wild (LFW), VGGFace1 and VGGFace2 attain state-of-the-art performance.
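
A minimal numpy sketch of a cosine-margin softmax loss of the kind discussed above: features and class weights are L2-normalised, a margin is subtracted from the target-class cosine, and the result is scaled before the softmax cross-entropy. The scale and margin values are illustrative; the paper's specific monotonically decreasing function is not reproduced here.

```python
# Sketch: cosine-margin softmax (large-margin cosine loss style) in numpy.
# s (scale) and m (margin) are illustrative values, not the paper's settings.
import numpy as np

def cosine_margin_softmax_loss(features, weights, labels, s=30.0, m=0.35):
    """features: (N, d), weights: (C, d), labels: (N,) int class ids."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = f @ w.T                               # (N, C) cosine similarities
    idx = np.arange(len(labels))
    logits = s * cos
    logits[idx, labels] = s * (cos[idx, labels] - m)   # margin on target class
    # numerically stable softmax cross-entropy
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[idx, labels].mean()

# usage with random data
rng = np.random.default_rng(0)
loss = cosine_margin_softmax_loss(rng.normal(size=(8, 128)),
                                  rng.normal(size=(10, 128)),
                                  rng.integers(0, 10, size=8))
print(f"loss ≈ {loss:.3f}")
```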

Keywords: deep convolutional neural networks, cosine margin face recognition, softmax loss, monotonically decreasing parameter

Procedia PDF Downloads 101
17071 Multinomial Dirichlet Gaussian Process Model for Classification of Multidimensional Data

Authors: Wanhyun Cho, Soonja Kang, Sanggoon Kim, Soonyoung Park

Abstract:

We present a probabilistic multinomial Dirichlet classification model for multidimensional data with Gaussian process priors. Here, we consider an efficient computational method that can be used to obtain the approximate posteriors for the latent variables and parameters needed to define the multiclass Gaussian process classification model. We first investigate the process of inducing a posterior distribution for the various parameters and the latent function by using variational Bayesian approximations and an importance sampling method, and then derive the predictive distribution of the latent function needed to classify new samples. The proposed model is applied to classify a synthetic multivariate dataset in order to verify its performance. Experimental results show that our model is more accurate than the other approximation methods.

Keywords: multinomial dirichlet classification model, Gaussian process priors, variational Bayesian approximation, importance sampling, approximate posterior distribution, marginal likelihood evidence

Procedia PDF Downloads 444
17070 The Use of Haar Wavelet Mother Signal Tool for Performance Analysis Response of Distillation Column (Application to Moroccan Case Study)

Authors: Mahacine Amrani

Abstract:

This paper reviews some Moroccan industrial applications of wavelets, especially the dynamic identification of a process model using the Haar mother wavelet response. Two recent Moroccan case studies are described, using dynamic data originating from a distillation column and an industrial polyethylene process plant. The purpose of the wavelet scheme is to build on-line dynamic models. In both case studies, a comparison is carried out between the Haar mother wavelet response model and a linear difference equation model. Finally, on the basis of the comparison of process performances and best responses, it concludes which model may be useful for creating an estimated on-line internal model control and for its application towards model-predictive controllers (MPC). All calculations were implemented using AutoSignal software.
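
For reference, a single level of the Haar wavelet transform used in this kind of identification can be written in a few lines; the sketch below decomposes a signal into approximation and detail coefficients and is a generic illustration, not the AutoSignal procedure used in the case studies.

```python
# Sketch: one level of the orthonormal Haar wavelet transform (generic
# illustration, not the AutoSignal procedure used in the case studies).
import numpy as np

def haar_level(x):
    """x: 1-D array of even length. Returns (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2.0)   # low-pass (scaling) coefficients
    detail = (even - odd) / np.sqrt(2.0)   # high-pass (wavelet) coefficients
    return approx, detail

def haar_inverse(approx, detail):
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

signal = np.sin(np.linspace(0, 4 * np.pi, 64)) + 0.1 * np.random.default_rng(1).normal(size=64)
a, d = haar_level(signal)
assert np.allclose(haar_inverse(a, d), signal)   # perfect reconstruction
```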

Keywords: process performance, model, wavelets, Haar, Moroccan

Procedia PDF Downloads 317
17069 Surface Modification of Titanium Alloy with Laser Treatment

Authors: Nassier A. Nassir, Robert Birch, D. Rico Sierra, S. P. Edwardson, G. Dearden, Zhongwei Guan

Abstract:

The effect of laser surface treatment parameters on the residual strength of titanium alloy has been investigated. The influence of the laser surface treatment on the bonding strength between the titanium and poly-ether-ketone-ketone (PEKK) surfaces was also evaluated and compared to that of titanium foils without surface treatment, in order to optimize the laser parameters. Material characterization using an optical microscope was carried out to study the microstructure and to measure the mean roughness of the titanium surface. The results showed that surface roughness depends significantly on the laser power, increasing as the laser power is increased. Moreover, the tensile tests showed no significant drop in tensile strength for the treated samples compared to the virgin ones. In order to optimize the laser parameters as well as the corresponding surface roughness, single-lap shear tests were conducted on pairs of the laser-treated titanium strips. The results showed that the bonding shear strength between the titanium alloy and the PEKK film increased with surface roughness up to a specific limit, beyond which the laser parameters had no significant further effect on the bonding strength. This evidence suggests that it is not necessary to use very high laser power to treat the titanium surface in order to achieve a good bond between the titanium alloy and the PEKK film.

Keywords: bonding strength, laser surface treatment, PEKK, poly-ether-ketone-ketone, titanium alloy

Procedia PDF Downloads 338
17068 Investigation on Correlation of Earthquake Intensity Parameters with Seismic Response of Reinforced Concrete Structures

Authors: Semra Sirin Kiris

Abstract:

Nonlinear dynamic analysis is permitted for structures without any restrictions. The important issue is the selection of the design earthquake for the analyses, since quite different responses may be obtained using ground motion records from the same general area, even ones resulting from the same earthquake. In seismic design codes, the method requires scaling earthquake records to a specified hazard level based on the site response spectrum. Many studies have indicated that this approach to selection can cause a large scatter in response, and that other characteristics of the ground motion, obtained in a different manner, may show better correlation with peak seismic response. For this reason, the influence of eleven different ground motion parameters on the peak displacement of reinforced concrete systems is examined in this paper. From 7020 nonlinear time-history analyses of single-degree-of-freedom systems, the most effective earthquake parameters are given for ranges of the initial period and strength ratio of the structures. In this study, a hysteresis model for reinforced concrete called Q-hyst is used, which does not take into account strength and stiffness degradation. The post-yielding to elastic stiffness ratio is taken as 0.15. The initial period T ranges from 0.1 s to 0.9 s with a 0.1 s interval, and three different strength ratios are used. The 260 earthquake records selected all have magnitudes greater than M = 6. The earthquake parameters related to the energy content, duration or peak values of the ground motion records are PGA (peak ground acceleration), PGV (peak ground velocity), PGD (peak ground displacement), MIV (maximum incremental velocity), EPA (effective peak acceleration), EPV (effective peak velocity), teff (effective duration), A95 (an Arias-intensity-based parameter), SPGA (significant peak ground acceleration), ID (damage factor) and Sa (spectral acceleration). Observing the correlation coefficients between the ground motion parameters and the peak displacement of the structures, different earthquake parameters play a role in the peak displacement demand depending on the period range and the strength ratio of the reinforced concrete system. The influence of Sa tends to decrease for high strength ratios and T = 0.3 s-0.6 s. ID and PGD are not suitable measures of earthquake effect, since no high correlation with displacement demand is observed. The influence of A95 is high for T = 0.1 s but low for higher values of T and of the strength ratio. PGA, EPA and SPGA show their highest correlation for T = 0.1 s, but their effectiveness decreases with higher T. Considering the whole range of structural parameters, MIV is the most effective parameter.
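
To make the parameter definitions above concrete, the sketch below computes three of the listed intensity measures (PGA, PGV and an Arias-type integral) from acceleration records and correlates them with peak displacements across a record set; the synthetic records and the use of Pearson correlation are illustrative assumptions.

```python
# Sketch: compute PGA, PGV and Arias intensity for acceleration records and
# correlate them with peak displacement demands. Records here are synthetic.
import numpy as np

G = 9.81
rng = np.random.default_rng(0)
dt = 0.01                                   # time step [s]

def intensity_measures(acc, dt):
    vel = np.cumsum(acc) * dt               # simple integration to velocity
    pga = np.max(np.abs(acc))
    pgv = np.max(np.abs(vel))
    arias = np.pi / (2 * G) * np.sum(acc**2) * dt
    return pga, pgv, arias

# stand-in for a suite of records and their computed peak displacements
records = [rng.normal(scale=1.5, size=2000) * np.exp(-np.linspace(0, 3, 2000))
           for _ in range(50)]
peak_disp = np.array([0.02 * intensity_measures(a, dt)[1] + rng.normal(scale=0.002)
                      for a in records])    # toy "demand" tied loosely to PGV

ims = np.array([intensity_measures(a, dt) for a in records])
for name, col in zip(["PGA", "PGV", "Arias"], ims.T):
    r = np.corrcoef(col, peak_disp)[0, 1]
    print(f"correlation of {name} with peak displacement: {r:+.2f}")
```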

Keywords: earthquake parameters, earthquake resistant design, nonlinear analysis, reinforced concrete

Procedia PDF Downloads 152
17067 Modeling of Transformer Winding for Transients: Frequency-Dependent Proximity and Skin Analysis

Authors: Yazid Alkraimeen

Abstract:

Precise prediction of dielectric stresses and high voltages in power transformers requires the accurate calculation of frequency-dependent parameters; a lack of accuracy can result in severe damage to transformer windings. Transient conditions are studied with digital computers, which require the implementation of accurate models. This paper analyzes the computation of the frequency-dependent skin and proximity losses included in the transformer winding model, using analytical equations and the Finite Element Method (FEM). A modified formula for calculating the proximity and skin losses is presented. The results of the frequency-dependent parameter calculations are verified using the Finite Element Method. The time-domain transient voltages are obtained using the Numerical Inverse Laplace Transform. The results show that the classical formula for proximity losses overestimates the transient voltages when compared with the results obtained from the modified method on a simple transformer geometry.
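
As background to the frequency-dependent losses discussed above, the sketch below evaluates the classical skin depth and a crude per-unit-length AC resistance estimate for a round copper conductor over a range of frequencies; it illustrates only the textbook skin-effect relation, not the paper's modified proximity-loss formula.

```python
# Sketch: classical skin depth and a crude AC resistance estimate for a round
# copper conductor. Illustrates the textbook skin effect only, not the
# modified proximity-loss formula proposed in the paper.
import numpy as np

MU0 = 4e-7 * np.pi          # vacuum permeability [H/m]
RHO_CU = 1.68e-8            # copper resistivity [ohm*m]
radius = 1.0e-3             # conductor radius [m], placeholder value

for f in (50, 1e3, 10e3, 100e3, 1e6):
    omega = 2 * np.pi * f
    delta = np.sqrt(2 * RHO_CU / (omega * MU0))          # skin depth [m]
    r_dc = RHO_CU / (np.pi * radius**2)                  # DC resistance [ohm/m]
    if delta < radius:
        # current roughly confined to an annulus of thickness delta
        r_ac = RHO_CU / (np.pi * (radius**2 - (radius - delta)**2))
    else:
        r_ac = r_dc
    print(f"f = {f:>9.0f} Hz  skin depth = {delta*1e3:6.3f} mm  "
          f"R_ac/R_dc ≈ {r_ac / r_dc:5.2f}")
```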

Keywords: fast front transients, proximity losses, transformer winding modeling, skin losses

Procedia PDF Downloads 139
17066 Model Estimation and Error Level for Okike’s Merged Irregular Transposition Cipher

Authors: Okike Benjamin, Garba E. J. D.

Abstract:

The researcher has developed a new encryption technique known as the Merged Irregular Transposition Cipher. In this method of encryption, a message to be encrypted is split into parts and each part is encrypted separately. Before the encrypted message is transmitted to the recipient(s), the positions of the splits in the encrypted message can be swapped to ensure greater security. This work seeks to develop a model relating the split number, S, and the average number of characters per split, L, as the message under consideration is split into 2 through 10 parts. After developing the model, the error level in the model is determined.

Keywords: merged irregular transposition, error level, model estimation, message splitting

Procedia PDF Downloads 314
17065 3D Multimedia Model for Educational Design Engineering

Authors: Mohanaad Talal Shakir

Abstract:

This paper proposes an educational design using multimedia technology for the Engineering of Computer Technology department at Alma'ref University College in Iraq. The paper evaluates students' acceptance, cognition, and interactivity with the proposed model, using statistical relationships to determine the stage of the model. The objectives of the proposed educational design are to develop user-friendly software for educational purposes using multimedia technology and to develop an animation of a 3D model to simulate the assembling and disassembling process of a high-speed flow facility.

Keywords: CAL, multimedia, shock tunnel, interactivity, engineering education

Procedia PDF Downloads 623
17064 A Unified Model for Longshore Sediment Transport Rate Estimation

Authors: Aleksandra Dudkowska, Gabriela Gic-Grusza

Abstract:

Wind-wave-induced sediment transport is an important multidimensional and multiscale dynamic process affecting coastal seabed changes and coastline evolution. Knowledge of the sediment transport rate is important for solving many environmental and geotechnical issues. There are many types of sediment transport models, but none of them is widely accepted, because the process is not fully defined. Another problem is a lack of sufficient measurement data to verify the proposed hypotheses. There are different types of models for longshore sediment transport (LST, which is discussed in this work) and cross-shore transport, which relate to different time and space scales of the processes. There are models describing bed-load transport (discussed in this work), suspended and total sediment transport. LST models use, among others, information about (i) the flow velocity near the bottom, which in the case of wave-current interaction in the coastal zone is a separate problem, and (ii) the critical bed shear stress, which strongly depends on the type of sediment and becomes complicated in the case of heterogeneous sediment. Moreover, the LST rate is strongly dependent on local environmental conditions. To organize the existing knowledge, a series of sediment transport model intercomparisons was carried out as part of the project “Development of a predictive model of morphodynamic changes in the coastal zone”. Four classical one-grid-point models were studied and intercompared over a wide range of bottom shear stress conditions, corresponding to wind-wave conditions appropriate for the coastal zone of Polish marine areas. The set of models comprises classical theories that assume a simplified influence of turbulence on sediment transport (Du Boys, Meyer-Peter & Müller, Ribberink, Engelund & Hansen). It turned out that the estimated instantaneous longshore mass sediment transport values are in general agreement with earlier studies and measurements conducted in the area of interest. However, none of the formulas really stands out from the rest as being particularly suitable for the test location over the whole analyzed flow velocity range. Therefore, based on the models discussed, a new unified formula for longshore sediment transport rate estimation is introduced, which constitutes the main original result of this study. The sediment transport rate is calculated from the bed shear stress and the critical bed shear stress. The dependence on environmental conditions is expressed by one coefficient (in the form of a constant or a function), so the model presented can be quite easily adjusted to local conditions. The importance of each model parameter for specific velocity ranges is discussed. Moreover, it is shown that the near-bottom flow velocity is the main determinant of longshore bed load in storm conditions. Thus, the accuracy of the results depends less on the sediment transport model itself and more on the appropriate modeling of the near-bottom velocities.
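
One of the classical formulas compared in the study, the Meyer-Peter & Müller bed-load relation, can be evaluated as in the sketch below; the grain size, shear stress values and critical Shields parameter are placeholder inputs, and the unified formula proposed by the authors is not reproduced.

```python
# Sketch: Meyer-Peter & Müller (1948) bed-load transport rate from bed shear
# stress. Grain size and shear stress values are placeholders; the unified
# formula proposed in the paper is not reproduced here.
import numpy as np

RHO_W, RHO_S, G = 1000.0, 2650.0, 9.81     # water/sediment density, gravity
D50 = 0.2e-3                               # median grain diameter [m]
THETA_CR = 0.047                           # critical Shields parameter

def mpm_bedload(tau_b):
    """tau_b: bed shear stress [Pa]. Returns volumetric rate q_b [m^2/s]."""
    s = RHO_S / RHO_W
    theta = tau_b / ((RHO_S - RHO_W) * G * D50)        # Shields parameter
    phi = 8.0 * np.clip(theta - THETA_CR, 0.0, None) ** 1.5
    return phi * np.sqrt((s - 1.0) * G * D50**3)

for tau in (0.1, 0.5, 1.0, 2.0):                        # Pa
    print(f"tau_b = {tau:4.1f} Pa -> q_b = {mpm_bedload(tau):.3e} m^2/s")
```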

Keywords: bedload transport, longshore sediment transport, sediment transport models, coastal zone

Procedia PDF Downloads 387
17063 The Feasibility of Glycerol Steam Reforming in an Industrial Sized Fixed Bed Reactor Using Computational Fluid Dynamic (CFD) Simulations

Authors: Mahendra Singh, Narasimhareddy Ravuru

Abstract:

For the past decade, the production of biodiesel has increased significantly, along with that of its by-product, glycerol. The massive entry of biodiesel-derived glycerol into the glycerol market has caused its value to plummet. Newer ways to utilize the glycerol by-product must be implemented, or the biodiesel industry will face serious economic problems. The biodiesel industry should consider steam reforming glycerol to produce hydrogen gas. Steam reforming is the most efficient way of producing hydrogen, and there is a lot of demand for it in the petroleum and chemical industries. This study investigates the feasibility of glycerol steam reforming in an industrial-sized fixed bed reactor. In this paper, computational fluid dynamic (CFD) simulations are used to visualize the extent of the transport resistances that would occur in an industrial-sized reactor. An important parameter in reactor design is the size of the catalyst particle: the particles cannot be so large that transport resistances become excessive, but also not so small that an extraordinary pressure drop occurs. The goal of this paper is to find the catalyst size that gives the highest conversion under various flow rates. Computational fluid dynamics simulated the transport resistances, and a pseudo-homogeneous reactor model was used to evaluate the pressure drop and conversion. The CFD simulations showed that glycerol steam reforming has strong internal diffusion resistances, resulting in extremely low effectiveness factors. In the pseudo-homogeneous reactor model, the highest conversion obtained with a Reynolds number of 100 (29.5 kg/h) was 9.14%, using a 1/6 inch catalyst diameter. Due to the low effectiveness factors and high carbon deposition rates, a fluidized bed is recommended as the appropriate reactor for glycerol steam reforming.
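
The internal diffusion limitation highlighted above can be illustrated with the classical Thiele modulus and effectiveness factor for a spherical catalyst pellet under first-order kinetics; the rate constant and effective diffusivity values below are placeholders, not values fitted to glycerol steam reforming.

```python
# Sketch: Thiele modulus and effectiveness factor for a spherical pellet with
# first-order kinetics. Rate constant and diffusivity are placeholder values.
import numpy as np

def effectiveness_factor(radius, k, d_eff):
    """radius [m], k: volumetric rate constant [1/s], d_eff [m^2/s]."""
    phi = radius * np.sqrt(k / d_eff)               # Thiele modulus (sphere)
    if phi < 1e-6:
        return 1.0
    return (3.0 / phi**2) * (phi / np.tanh(phi) - 1.0)

k, d_eff = 5.0, 1e-7                                # assumed values
inch = 0.0254
for d_in in (1/16, 1/8, 1/6, 1/4):                  # particle diameters [inch]
    r = 0.5 * d_in * inch
    eta = effectiveness_factor(r, k, d_eff)
    print(f"d = {d_in:.4f} in  Thiele modulus = {r*np.sqrt(k/d_eff):6.2f}  "
          f"effectiveness factor = {eta:.3f}")
```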

Keywords: computational fluid dynamic, fixed bed reactor, glycerol, steam reforming, biodiesel

Procedia PDF Downloads 308
17062 Diagnostic Assessment for Mastery Learning of Engineering Students with a Bayesian Network Model

Authors: Zhidong Zhang, Yingchen Yang

Abstract:

In this study, a diagnostic assessment model for mastery engineering learning was established based on a group of undergraduate students who studied an engineering course. A diagnostic assessment model can examine both students' learning processes and report achievement results. One very unique characteristic is that the diagnostic assessment model can recognize errors and anything blocking students in their learning processes. Feedback is provided to help students know how to solve the learning problems with alternative strategies and to help the instructor find alternative pedagogical strategies in the instructional design. Dynamics is a core course shared by several engineering programs, and its problems are very challenging for engineering students to solve; thus, knowledge acquisition and problem-solving skills are crucial for student success. Therefore, developing an effective and valid assessment model for student learning is of great importance. Diagnostic assessment is such a model, which can provide effective feedback for both students and the instructor in the mastery of engineering learning.

Keywords: diagnostic assessment, mastery learning, engineering, bayesian network model, learning processes

Procedia PDF Downloads 152
17061 Modelling Residential Space Heating Energy for Romania

Authors: Ion Smeureanu, Adriana Reveiu, Marian Dardala, Titus Felix Furtuna, Roman Kanala

Abstract:

This paper proposes a linear model for optimizing domestic energy consumption in Romania. Both techno-economic and consumer behavior approaches have been considered in developing the model. The proposed model aims to reduce household energy consumption by assembling into a unitary model aspects concerning residential lighting, space heating, hot water, combined space heating and hot water, space cooling, and passenger transport. This paper focuses on the domestic space heating energy consumption model and quantifies not only technical-economic issues but also the impact of consumer behavior, related to people's decisions to envelope and insulate buildings in order to minimize energy consumption.

Keywords: consumer behavior, open source energy modeling system (OSeMOSYS), MARKAL/TIMES Romanian energy model, virtual technologies

Procedia PDF Downloads 543
17060 Ecosystem Model for Environmental Applications

Authors: Cristina Schreiner, Romeo Ciobanu, Marius Pislaru

Abstract:

This paper aims to build a system based on fuzzy models that can be implemented in the assessment of ecological systems, in order to determine appropriate methods of action for reducing adverse effects on the environment and, implicitly, on the population. The proposed model provides a new perspective for environmental assessment, and it can be used as a practical instrument for decision-making.

Keywords: ecosystem model, environmental security, fuzzy logic, sustainability of habitable regions

Procedia PDF Downloads 420
17059 Mathematical and Numerical Analysis of a Nonlinear Cross Diffusion System

Authors: Hassan Al Salman

Abstract:

We consider a nonlinear parabolic cross diffusion model arising in applied mathematics. A fully practical piecewise linear finite element approximation of the model is studied. By using entropy-type inequalities and compactness arguments, the existence of a global weak solution is proved. Assuming further regularity of the solution of the model, some uniqueness results and error estimates are established. Finally, some numerical experiments are performed.

Keywords: cross diffusion model, entropy-type inequality, finite element approximation, numerical analysis

Procedia PDF Downloads 383
17058 ACBM: Attention-Based CNN and Bi-LSTM Model for Continuous Identity Authentication

Authors: Rui Mao, Heming Ji, Xiaoyu Wang

Abstract:

Keystroke dynamics are widely used in identity recognition. They have the advantage that an individual's typing rhythm is difficult to imitate, and they support continuous authentication through the keyboard without extra devices. Existing keystroke dynamics authentication methods based on machine learning have drawbacks in supporting relatively complex scenarios with massive data, in both feature extraction and model optimization. To overcome these weaknesses, an authentication model for keystroke dynamics based on deep learning is proposed. The model uses feature vectors formed from keystroke content and keystroke timing, and it ensures efficient continuous authentication by coupling an attention mechanism with a combination of CNN and Bi-LSTM layers. The model has been tested with the Open Data Buffalo dataset, and the results show an FRR of 3.09%, an FAR of 3.03%, and an EER of 4.23%. This demonstrates that the model is efficient and accurate for continuous authentication.
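
A compact PyTorch sketch of an architecture in the spirit described above (1-D CNN, bidirectional LSTM, additive attention pooling, classifier head); layer sizes, sequence length and feature dimension are illustrative assumptions and do not reproduce the paper's exact ACBM configuration.

```python
# Sketch: CNN + Bi-LSTM + attention pooling for keystroke sequences.
# Layer sizes and input dimensions are illustrative, not the paper's ACBM.
import torch
import torch.nn as nn

class KeystrokeNet(nn.Module):
    def __init__(self, n_features=4, n_classes=2, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)       # additive attention scores
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                          # x: (batch, seq_len, n_features)
        h = self.cnn(x.transpose(1, 2)).transpose(1, 2)   # (batch, seq_len, 32)
        h, _ = self.lstm(h)                        # (batch, seq_len, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)     # (batch, seq_len, 1)
        pooled = (w * h).sum(dim=1)                # attention-weighted pooling
        return self.head(pooled)                   # class logits

# usage: batch of 8 sequences, 50 keystroke events, 4 timing/content features
model = KeystrokeNet()
logits = model(torch.randn(8, 50, 4))
print(logits.shape)                                # torch.Size([8, 2])
```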

Keywords: keystroke dynamics, identity authentication, deep learning, CNN, LSTM

Procedia PDF Downloads 155
17057 Ex Vivo Permeation Comparison Study of Flurbiprofen from Nanoparticles through Human Skin

Authors: Sheimah El Bejjaji, Lara Gorsek, Chandler Quilchez, Joaquim Suñer, Mireia Mallandrich

Abstract:

Flurbiprofen is an anti-inflammatory drug used in several treatments. The purpose of this study was to compare the permeation of two different formulations of flurbiprofen through human skin. The first formulation was a solution of flurbiprofen in polyethylene glycol 3350 (PEG 3350). The second formulation was flurbiprofen encapsulated in poly-ɛ-caprolactone (PɛCL) nanoparticles (NPs), stabilized with poloxamer 188, freeze-dried with PEG 3350 as a cryoprotectant and sterilized by gamma irradiation. Human skin was obtained from the abdominal region of a healthy patient. The experimental protocol was approved by the Bioethics Committee of Barcelona SCIAS Hospital (Spain), and written informed consent was obtained. After being frozen at -20 ºC, the skin samples were cut with a dermatome at 400 µm. The ex vivo permeation study was performed in Franz diffusion cells with a diffusion area of 2.54 cm². Skin samples were placed between the two compartments, with the dermal side in contact with the receptor medium and the epidermal side in contact with the donor chamber, to which the formulation was applied. The permeation study was conducted for 24 hours at 32 ± 0.5 °C under sink conditions. The results were analyzed with an unpaired t-test, and the p-values indicate that the nanoparticle formulation had a higher permeability coefficient, flux, partition parameter, diffusion parameter, and lag time. Applied topically, this formulation could benefit joint and ligament inflammation as an alternative to oral drugs.
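
The permeation parameters compared above (flux, permeability coefficient, lag time) are typically derived from the linear, steady-state part of the cumulative-permeation curve, as in the sketch below; the time points, cumulative amounts and donor concentration are invented placeholder data, not the study's measurements.

```python
# Sketch: flux, permeability coefficient and lag time from the steady-state
# part of a Franz-cell cumulative permeation curve. Data are placeholders.
import numpy as np

area = 2.54            # diffusion area [cm^2] (as in the study)
c_donor = 1000.0       # donor concentration [µg/cm^3], assumed value

t = np.array([2, 4, 6, 8, 12, 18, 24], dtype=float)            # h
q_cum = np.array([1, 5, 14, 25, 48, 82, 116], dtype=float)     # µg (per cell)

steady = t >= 6                                 # assume steady state from 6 h
slope, intercept = np.polyfit(t[steady], q_cum[steady] / area, 1)

flux = slope                                    # J [µg/(cm^2·h)]
kp = flux / c_donor                             # permeability coefficient [cm/h]
lag_time = -intercept / slope                   # x-intercept of the fit [h]
print(f"J = {flux:.2f} µg/cm²/h, Kp = {kp:.2e} cm/h, lag = {lag_time:.1f} h")
```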

Keywords: anti-inflammatory drug, flurbiprofen, human skin, nanoparticles, skin permeation

Procedia PDF Downloads 90
17056 Energy and Exergy Analysis of Anode-Supported and Electrolyte–Supported Solid Oxide Fuel Cells Gas Turbine Power System

Authors: Abdulrazzak Akroot, Lutfu Namli

Abstract:

Solid oxide fuel cells (SOFCs) are one of the most promising technologies, since they can produce electricity directly from fuel and generate a lot of waste heat that is generally used in gas turbines to improve the overall performance of the thermal power plant. In this study, an energy and exergy analysis of a solid oxide fuel cell/gas turbine hybrid system was performed in MATLAB to examine the performance characteristics of the hybrid system in two different configurations: an anode-supported model and an electrolyte-supported model. The results indicate that if the fuel utilization factor is reduced from 0.85 to 0.65, the overall efficiency decreases from 64.61% to 59.27% for the anode-supported model, whereas it decreases from 58.3% to 56.4% for the electrolyte-supported model. Likewise, the overall exergy efficiency decreases from 53.86% to 44.06% for the anode-supported model and from 39.96% to 33.94% for the electrolyte-supported model. Furthermore, increasing the air utilization factor has a negative impact on the electrical power output and the efficiencies of the overall system due to the reduction in the O₂ concentration at the cathode-electrolyte interface.

Keywords: solid oxide fuel cell, anode-supported model, electrolyte-supported model, energy analysis, exergy analysis

Procedia PDF Downloads 152
17055 Analysis of Reflection Coefficients of Reflected and Transmitted Waves at the Interface Between Viscous Fluid and Hygro-Thermo-Orthotropic Medium

Authors: Anand Kumar Yadav

Abstract:

Purpose – The purpose of this paper is to investigate the variation of the amplitude ratios of the various transmitted and reflected waves. Design/methodology/approach – The reflection and transmission of plane waves at the interface between an orthotropic hygro-thermo-elastic half-space (OHTHS) and a viscous-fluid half-space (VFHS) were investigated in this study with reference to coupled hygro-thermo-elasticity. Findings – The interface, where y = 0, is struck by plane longitudinal (P) waves travelling through the VFHS. As a result, two waves are reflected in the VFHS and four waves are transmitted in the OHTHS, namely a longitudinal displacement (P) wave, a thermal diffusion (TD) wave, a moisture diffusion (mD) wave and a shear vertical (SV) wave. Expressions for the reflection and transmission coefficients are developed for the incidence of a hygrothermal plane wave. These ratios are displayed graphically and are examined under the influence of coupled hygro-thermo-elasticity. Research limitations/implications – According to the existing literature, there is not much study on the model under consideration, which combines an OHTHS and a VFHS with coupled hygro-thermo-elasticity. Practical implications – The current model can be applied in many different areas, such as soil dynamics, nuclear reactors, high-particle accelerators, earthquake engineering, and other areas where coupled hygro-thermo-elasticity is important. In a range of technical and geophysical settings, wave propagation in a viscous fluid-thermoelastic medium with various characteristics, such as initial stress, magnetic field, porosity and temperature, gives essential information regarding the presence of new and modified waves. This model may prove useful to experimental seismologists refining earthquake estimates, to new-material designers, and to researchers. Social implications – Researchers may use coupled hygro-thermo-elasticity to categorise materials, where the parameter is a new indicator of a material's ability to conduct heat in interaction with diverse materials. Originality/value – The submitted text is the sole creation of the team of authors, and all authors contributed equally to its creation.

Keywords: hygro-thermo-elasticity, viscous fluid, reflection coefficient, transmission coefficient, moisture concentration

Procedia PDF Downloads 66
17054 Numerical Modeling of Storm Swells in Harbor by Boussinesq Equations Model

Authors: Mustapha Kamel Mihoubi, Hocine Dahmani

Abstract:

The purpose of this work is to study the agitation caused by storm waves in a harbor basin for different wave directions relative to the current breakwater layout, using a numerical model based on the shallow-water Boussinesq equations (MIKE 21 BW). Based on the attenuation of wave penetration, an optimal solution will be selected to be reproduced in a reduced-scale model. An alternative breakwater arrangement will also be proposed to reduce the agitation and the effects of swell reflection caused by the penetration of waves into the harbor.

Keywords: agitation, Boussinesq equations, combination, harbor

Procedia PDF Downloads 389
17053 Bottling the Darkness of Inner Life: Considering the Origins of Model Psychosis

Authors: Matthew Perkins-McVey

Abstract:

The pharmacological arm of mental health treatment is in a state of crisis. The promises of the Prozac century have fallen short; the number of therapeutically significant medications that successfully complete development shrinks with every passing year, and the demand for better treatments only grows. Answering these hardships is a renewed optimism concerning the efficacy of controlled psychedelic therapy, a renaissance that has seen the return of a familiar concept: intoxication as a model psychosis. First appearing in the mid-19th century and featuring in an array of 20th-century efforts in psychedelic research, model psychosis has once more come to the foreground of psychedelic research. And yet, little has been made of where this peculiar, perhaps even intoxicatingly mad, idea originates. This paper seeks to uncover the conceptual foundations underlying the early emergence of model psychosis. The narrative explores the conceptual foundations behind the independent development of the concept of model psychosis, considering similarities and differences. In the course of this examination, it becomes apparent that the definition of endogenous psychosis, which formed in the mid-19th century, is the direct product of emerging understandings of exogenous psychosis, or model psychosis. Ultimately, the goal is not merely to understand how and why model psychosis became thinkable but to examine how seemingly secondary conceptual changes can engender new ways of being a psychiatric subject.

Keywords: history of psychiatry, model psychosis, history of medicine, history of science

Procedia PDF Downloads 89
17052 An Agent-Based Model of Innovation Diffusion Using Heterogeneous Social Interaction and Preference

Authors: Jang kyun Cho, Jeong-dong Lee

Abstract:

The advent of the Internet, mobile communications, and social network services has stimulated social interactions among consumers, allowing people to affect one another’s innovation adoptions by exchanging information more frequently and more quickly. Previous diffusion models, such as the Bass model, however, face limitations in reflecting such recent phenomena in society. These models are weak in their ability to model interactions between agents; they model aggregated-level behaviors only. The agent-based model, which is an alternative to the aggregate model, is good for individual-level modeling, but it has not yet been grounded in an economic perspective on social interactions. This study assumes the presence of social utility from other consumers in the adoption of innovation and investigates the effect of individual interactions on innovation diffusion by developing a new model called the interaction-based diffusion model. By comparing this model with previous diffusion models, the study also examines how the proposed model explains innovation diffusion from the perspective of economics. In addition, the study recommends the use of a small-world network topology instead of cellular automata to describe innovation diffusion. The model is based on individual preference and heterogeneous social interactions using a utility specification, which is expandable and thus able to encompass various issues in diffusion research, such as reservation price. Furthermore, the study proposes a new framework to forecast aggregated-level market demand from individual-level modeling. The model also exhibits a good fit to real market data. It is expected that the study will contribute to our understanding of the innovation diffusion process through its microeconomic theoretical approach.
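
A minimal sketch of an interaction-based diffusion simulation of the kind outlined above: agents on a small-world (Watts-Strogatz) network adopt when their private preference plus a social-utility term from adopted neighbours exceeds the product price. The utility form, parameter values and network size are illustrative assumptions, not the authors' specification.

```python
# Sketch: agent-based innovation diffusion on a small-world network.
# Adoption rule, parameters and network size are illustrative assumptions.
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
G = nx.watts_strogatz_graph(n=1000, k=8, p=0.1, seed=0)   # small-world topology

price = 1.0
preference = rng.normal(loc=0.6, scale=0.3, size=G.number_of_nodes())
social_weight = 0.8
adopted = rng.random(G.number_of_nodes()) < 0.01          # seed adopters

for step in range(30):
    frac_nb = np.array([adopted[list(G.neighbors(i))].mean()
                        if G.degree(i) > 0 else 0.0
                        for i in G.nodes()])
    utility = preference + social_weight * frac_nb         # heterogeneous utility
    adopted = adopted | (utility > price)
    if step % 5 == 0:
        print(f"step {step:2d}: adoption = {adopted.mean():.3f}")
```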

Keywords: innovation diffusion, agent based model, small-world network, demand forecasting

Procedia PDF Downloads 341
17051 Bioavailability of Zinc to Wheat Grown in the Calcareous Soils of Iraqi Kurdistan

Authors: Muhammed Saeed Rasheed

Abstract:

Knowledge of the zinc and phytic acid (PA) concentrations of staple cereal crops is essential when evaluating the nutritional health of national and regional populations. In the present study, a total of 120 farmers’ fields in Iraqi Kurdistan were surveyed for zinc status in soil and wheat grain samples; wheat is the staple carbohydrate source in the region. Soils were analysed for total concentrations of phosphorus (PT) and zinc (ZnT), available P (POlsen) and Zn (ZnDTPA), and for pH. Average values (mg kg⁻¹) ranged between 403-3740 (PT), 42.0-203 (ZnT), 2.13-28.1 (POlsen) and 0.14-5.23 (ZnDTPA); pH was in the range 7.46-8.67. The concentrations of Zn, the PA/Zn molar ratio and the estimated Zn bioavailability were also determined in wheat grain. The ranges of Zn and PA concentrations (mg kg⁻¹) were 12.3-63.2 and 5400-9300, respectively, giving a PA/Zn molar ratio of 15.7-30.6. A trivariate model was used to estimate the intake of bioaccessible Zn, employing the following parameter values: (i) maximum Zn absorption = 0.09 (AMAX), (ii) equilibrium dissociation constant of the zinc-receptor binding reaction = 0.680 (KP), and (iii) equilibrium dissociation constant of the Zn-PA binding reaction = 0.033 (KR). In the model, total daily absorbed Zn (TAZ) (mg d⁻¹) was estimated as a function of total daily nutritional PA (mmol d⁻¹) and total daily nutritional Zn (mmol d⁻¹), assuming an average wheat flour consumption of 300 g day⁻¹ in the region. Consideration of the PA and Zn intake suggests that only 21.5±2.9% of grain Zn is bioavailable, so that the effective Zn intake from wheat is only 1.84-2.63 mg d⁻¹ for the local population. Overall, the results suggest that available dietary Zn is below recommended levels (11 mg d⁻¹), partly due to low uptake by wheat but also due to the presence of large concentrations of PA in wheat grains. A crop breeding program combined with enhanced agronomic management methods is needed to enhance both Zn uptake and bioavailability in grains of cultivated wheat types.
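
A sketch of the kind of trivariate absorption calculation described above, using the saturation-response form commonly attributed to Miller and colleagues with the parameter values quoted in the abstract; the mapping of the abstract's KP/KR labels onto the receptor and phytate binding constants, and the grain composition and flour intake figures below, are assumptions (mid-range illustrative values), and the exact equation used by the author may differ.

```python
# Sketch: trivariate zinc-absorption model in the saturation-response form of
# Miller et al. The mapping of the abstract's KP/KR labels onto the receptor
# and phytate binding constants, and the grain composition, are assumptions.
import math

AMAX = 0.09          # maximal absorption [mmol Zn/d] (value from the abstract)
K_RECEPTOR = 0.033   # Zn-receptor binding constant [mmol/d] (assumed mapping)
K_PHYTATE = 0.680    # Zn-phytate binding constant [mmol/d] (assumed mapping)
MW_ZN, MW_PA = 65.38, 660.04          # g/mol

def absorbed_zn(tdz_mmol, tdp_mmol):
    """Total daily absorbed Zn (TAZ, mmol/d) from dietary Zn and phytic acid."""
    b = AMAX + tdz_mmol + K_RECEPTOR * (1.0 + tdp_mmol / K_PHYTATE)
    return 0.5 * (b - math.sqrt(b * b - 4.0 * AMAX * tdz_mmol))

flour = 0.300                          # kg flour per day (as in the abstract)
zn_grain, pa_grain = 30.0, 7000.0      # mg/kg, mid-range illustrative values

tdz = flour * zn_grain / MW_ZN         # mmol Zn/d
tdp = flour * pa_grain / MW_PA         # mmol PA/d
taz_mg = absorbed_zn(tdz, tdp) * MW_ZN
print(f"absorbed Zn ≈ {taz_mg:.2f} mg/d "
      f"({100 * taz_mg / (flour * zn_grain):.0f}% of grain Zn)")
```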

Keywords: phosphorus, zinc, phytic acid, phytic acid to zinc molar ratio, zinc bioavailability

Procedia PDF Downloads 123
17050 Generalized Extreme Value Regression with Binary Dependent Variable: An Application for Predicting Meteorological Drought Probabilities

Authors: Retius Chifurira

Abstract:

The logistic regression model is the model most commonly used to predict meteorological drought probabilities. When the dependent variable is extreme, the logistic model fails to adequately capture drought probabilities. In order to predict drought probabilities adequately, we use a generalized linear model (GLM) with the quantile function of the generalized extreme value distribution (GEVD) as the link function. Maximum likelihood estimation is used to estimate the parameters of the generalized extreme value (GEV) regression model. We compare the performance of the logistic and GEV regression models in predicting drought probabilities for Zimbabwe. The performance of the regression models is assessed using goodness-of-fit measures, namely the relative root mean square error (RRMSE) and the relative mean absolute error (RMAE). Results show that the GEV regression model performs better than the logistic model, thereby providing a good alternative candidate for predicting drought probabilities. This paper provides the first application of a GLM derived from extreme value theory to predict drought probabilities for a drought-prone country such as Zimbabwe.
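
A minimal sketch of a binary regression with a GEV link fitted by maximum likelihood: the linear predictor is passed through the GEV distribution function to give the drought probability, and the negative log-likelihood is minimised numerically. The parameterisation (standard GEV with the shape estimated jointly with the coefficients) and the synthetic rainfall covariate are illustrative choices, not the paper's exact specification.

```python
# Sketch: binary GEV-link regression fitted by maximum likelihood.
# Parameterisation and synthetic data are illustrative choices.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(0)
rain = rng.normal(size=300)                       # standardised rainfall index
drought = (rain + 0.5 * rng.normal(size=300) < -0.8).astype(float)
X = np.column_stack([np.ones_like(rain), rain])   # intercept + covariate

def neg_log_lik(params):
    beta, shape = params[:-1], params[-1]
    eta = X @ beta
    # GEV cdf of the linear predictor as the response probability
    p = np.clip(genextreme.cdf(eta, c=shape), 1e-10, 1 - 1e-10)
    return -np.sum(drought * np.log(p) + (1 - drought) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=np.array([0.0, -1.0, 0.1]), method="Nelder-Mead")
beta_hat, shape_hat = fit.x[:-1], fit.x[-1]
print("coefficients:", beta_hat, "GEV shape:", round(shape_hat, 3))
```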

Keywords: generalized extreme value distribution, general linear model, mean annual rainfall, meteorological drought probabilities

Procedia PDF Downloads 200
17049 Assessing the Theoretical Suitability of Sentinel-2 and Worldview-3 Data for Hydrocarbon Mapping of Spill Events, Using Hydrocarbon Spectral Slope Model

Authors: K. Tunde Olagunju, C. Scott Allen, Freek Van Der Meer

Abstract:

Identification of hydrocarbon oil in remote sensing images is often the first step in monitoring oil during spill events. Most remote sensing methods adopt techniques for hydrocarbon identification to achieve detection, in order to plan an appropriate cleanup program. Identification with optical sensors allows not only detection but also characterization and quantification. Until recently, in optical remote sensing, quantification and characterization have only been potentially possible using high-resolution laboratory and airborne imaging spectrometers (hyperspectral data). Unlike multispectral data, hyperspectral data are not freely available, as this data category is at present mainly obtained via airborne surveys. In this research, two (2) operational high-resolution multispectral satellites (WorldView-3 and Sentinel-2) are theoretically assessed for their suitability for hydrocarbon characterization, using the hydrocarbon spectral slope model (HYSS). This method utilizes the two most persistent hydrocarbon diagnostic/absorption features, at 1.73 µm and 2.30 µm, for hydrocarbon mapping on multispectral data. In this research, spectral measurements of seven (7) different hydrocarbon oils (crude and refined) taken on ten (10) different substrates with a laboratory ASD FieldSpec spectrometer were convolved to Sentinel-2 and WorldView-3 resolutions, using their full width at half maximum (FWHM) parameters. The resulting hydrocarbon slope values obtained from the studied samples enable clear qualitative discrimination of most hydrocarbons, despite the presence of different background substrates, particularly on WorldView-3. Due to the close conformity of central wavelengths and narrow bandwidths to the key hydrocarbon bands used in HYSS, the statistical significance of the qualitative analysis on WorldView-3 for all studied hydrocarbon oils was confirmed at the 95% confidence level (p-value < 0.01), except for diesel. Using multifactor analysis of variance (MANOVA), the discriminating power of HYSS is statistically significant for most hydrocarbon-substrate combinations on Sentinel-2 and WorldView-3 FWHM, revealing the potential of these two operational multispectral sensors as rapid-response tools for hydrocarbon mapping. One notable exception is highly transmissive hydrocarbons on Sentinel-2 data, due to the non-conformity of its spectral bands with key hydrocarbon absorptions and the relatively coarse bandwidth (> 100 nm).
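
The band-convolution step described above can be sketched as follows: each sensor band is represented by a Gaussian spectral response built from its centre wavelength and FWHM, and the laboratory reflectance spectrum is resampled as a response-weighted average. The band centres and FWHM values in the example are placeholders, not the official Sentinel-2 or WorldView-3 response functions.

```python
# Sketch: convolve a laboratory reflectance spectrum to broad sensor bands
# using Gaussian spectral response functions defined by centre and FWHM.
# Band centres/FWHM values are placeholders, not official sensor responses.
import numpy as np

def band_value(wavelengths, reflectance, centre, fwhm):
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    srf = np.exp(-0.5 * ((wavelengths - centre) / sigma) ** 2)
    return np.sum(srf * reflectance) / np.sum(srf)

# synthetic lab spectrum (1 nm sampling) with absorption dips near 1730/2300 nm
wl = np.arange(350, 2501, 1.0)
refl = (0.45 - 0.12 * np.exp(-0.5 * ((wl - 1730) / 15) ** 2)
             - 0.10 * np.exp(-0.5 * ((wl - 2300) / 20) ** 2))

# placeholder SWIR band definitions (centre, FWHM) in nm
bands = {"SWIR-A (~1.73 µm)": (1730, 40), "SWIR-B (~2.30 µm)": (2300, 50)}
for name, (centre, fwhm) in bands.items():
    print(f"{name}: convolved reflectance = {band_value(wl, refl, centre, fwhm):.3f}")
```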

Keywords: hydrocarbon, oil spill, remote sensing, hyperspectral, multispectral, hydrocarbon-substrate combination, Sentinel-2, WorldView-3

Procedia PDF Downloads 216
17048 Experimental Investigation of Gas Bubble Behaviours in a Domestic Heat Pump Water Heating System

Authors: J. B. Qin, X. H. Jiang, Y. T. Ge

Abstract:

The growing awareness of global warming potential has aroused international interest in and demand for reducing the greenhouse gas emissions produced by human activity. Much of the national energy in the UK has been consumed in the residential sector, mainly for space heating and domestic hot water production. Currently, gas boilers are mostly used for domestic water heating, which contributes significantly to excessive CO2 emissions and the consumption of primary energy resources. These issues can be addressed by popularizing heat pump systems, which achieve higher performance efficiency than traditional gas boilers. Even so, heat pump system performance can be further enhanced if the dissolved gases in the hot water circuit can be efficiently discharged. To achieve this target, the bubble behaviours in the heat pump water heating system need to be extensively investigated. In this paper, by varying the experimental conditions, the effects of various hot-water-side parameters on gas microbubble diameters were measured and analysed; these include system pressure, water flow rate, saturation ratio and heat output. The measurement results showed that the water flow rate is the most significant parameter influencing gas microbubble production. The research outcomes can contribute significantly to the understanding of gas bubble behaviours in domestic heat pump water heating systems and thus to the efficient discharge of the associated dissolved gases.

Keywords: heat pump water heating system, microbubble formation, dissolved gases in water, effectiveness

Procedia PDF Downloads 266