Search results for: boundary measure formula
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4926

4476 Evaluating the Total Costs of a Ransomware-Resilient Architecture for Healthcare Systems

Authors: Sreejith Gopinath, Aspen Olmsted

Abstract:

This paper is based on our previous work that proposed a risk-transference-based architecture for healthcare systems to store sensitive data outside the system boundary, rendering the system unattractive to would-be bad actors. This architecture also allows a compromised system to be abandoned and a new system instance spun up in place to ensure business continuity without paying a ransom or engaging with a bad actor. This paper delves into the details of various attacks we simulated against the prototype system. In the paper, we discuss at length the time and computational costs associated with storing and retrieving data in the prototype system, abandoning a compromised system, and setting up a new instance with existing data. Lastly, we simulate some analytical workloads over the data stored in our specialized data storage system and discuss the time and computational costs associated with running analytics over data in a specialized storage system outside the system boundary. In summary, this paper discusses the total costs of data storage, access, and analytics incurred with the proposed architecture.

Keywords: cybersecurity, healthcare, ransomware, resilience, risk transference

Procedia PDF Downloads 117
4475 Portfolio Optimization with Reward-Risk Ratio Measure Based on the Mean Absolute Deviation

Authors: Wlodzimierz Ogryczak, Michal Przyluski, Tomasz Sliwinski

Abstract:

In problems of portfolio selection, the reward-risk ratio criterion is optimized to search for a risky portfolio with the maximum increase of the mean return in proportion to the increase of the risk measure, when compared to risk-free investments. In the classical Markowitz model, risk is measured by the variance, thus representing Sharpe ratio optimization and leading to quadratic optimization problems. Several Linear Programming (LP) computable risk measures have been introduced and applied in portfolio optimization. In particular, the Mean Absolute Deviation (MAD) measure has been widely recognized. The reward-risk ratio optimization with the MAD measure can be transformed into an LP formulation with the number of constraints proportional to the number of scenarios and the number of variables proportional to the total of the number of scenarios and the number of instruments. This may lead to LP models with a huge number of variables and constraints in the case of real-life financial decisions based on several thousand scenarios, thus decreasing their computational efficiency and making them hardly solvable by general LP tools. We show that the computational efficiency can then be dramatically improved by an alternative model based on the inverse risk-reward ratio minimization and by taking advantage of LP duality. In the introduced LP model, the number of structural constraints is proportional to the number of instruments; the simplex method efficiency is therefore not seriously affected by the number of scenarios, which guarantees easy solvability. Moreover, we show that under a natural restriction on the target value, the MAD reward-risk ratio optimization is consistent with the second-order stochastic dominance rules.
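For context, the MAD measure referred to above gives rise to the classical Konno-Yamazaki LP, in which MAD is minimized for a required mean return rather than the ratio itself being optimized. The following is a minimal sketch with synthetic scenario data (all numbers illustrative, not from the paper), using `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
T, n = 60, 4                          # scenarios, instruments (toy data)
R = rng.normal(0.01, 0.05, (T, n))    # scenario return matrix
mu = R.mean(axis=0)
target = mu.mean()                    # required mean return (feasible by construction)

# Decision vector [x_1..x_n, d_1..d_T]; minimize (1/T) * sum(d_t), the MAD.
c = np.concatenate([np.zeros(n), np.full(T, 1.0 / T)])

# d_t >= |(R - mu) @ x|, linearized as two inequality blocks.
D = R - mu
A_ub = np.block([[ D, -np.eye(T)],
                 [-D, -np.eye(T)]])
b_ub = np.zeros(2 * T)
# Mean-return requirement: mu @ x >= target  ->  -mu @ x <= -target
A_ub = np.vstack([A_ub, np.concatenate([-mu, np.zeros(T)])])
b_ub = np.append(b_ub, -target)

A_eq = np.concatenate([np.ones(n), np.zeros(T)]).reshape(1, -1)  # full investment
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (n + T))   # long-only weights
x = res.x[:n]
```

Note the structure the abstract points out: this direct form carries T deviation variables and 2T scenario constraints, which is exactly what becomes burdensome for thousands of scenarios and motivates the dual reformulation.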

Keywords: portfolio optimization, reward-risk ratio, mean absolute deviation, linear programming

Procedia PDF Downloads 389
4474 Effect of Radiation on MHD Mixed Convection Stagnation Point Flow towards a Vertical Plate in a Porous Medium with Convective Boundary Condition

Authors: H. Niranjan, S. Sivasankaran, Zailan Siri

Abstract:

This study investigates mixed convection heat transfer about a thin vertical plate in a porous medium in the presence of magnetohydrodynamic (MHD) and radiation effects. The flow is assumed to be steady, laminar, incompressible, and two-dimensional. The nonlinear coupled parabolic partial differential equations governing the flow are transformed into non-similar boundary layer equations, which are then solved numerically using the shooting method. The effects of the conjugate heat transfer parameter, the porous medium parameter, the permeability parameter, the mixed convection parameter, the magnetic parameter, and the thermal radiation on the velocity and temperature profiles, as well as on the local skin friction and local heat transfer, are presented and analyzed. The validity of the methodology and analysis is checked by comparing the results obtained for some specific cases with those available in the literature. The effects of the various parameters on the local skin friction and the heat and mass transfer rates are presented in tabular form.
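The shooting method mentioned above can be illustrated on a simpler, classical boundary layer problem; here the Blasius equation stands in for the paper's non-similar system (a sketch of the technique only, not the paper's actual equations):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Blasius boundary-layer equation f''' + 0.5*f*f'' = 0,
# with f(0) = f'(0) = 0 and f'(inf) = 1 (illustrative stand-in).
def rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

ETA_MAX = 10.0  # finite stand-in for "infinity" in the far-field condition

def residual(s):  # s = guessed initial curvature f''(0)
    sol = solve_ivp(rhs, (0, ETA_MAX), [0.0, 0.0, s], rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] - 1.0   # f'(ETA_MAX) should equal 1

# "Shoot" on f''(0) until the far-field condition is met.
s_star = brentq(residual, 0.1, 1.0)
print(round(s_star, 5))  # ≈ 0.33206, the classical Blasius value
```

The same idea carries over to the coupled momentum/energy system of the paper: the unknown wall gradients are adjusted until the outer-edge conditions on velocity and temperature are satisfied.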

Keywords: MHD, porous medium, Soret/Dufour, stagnation-point

Procedia PDF Downloads 359
4473 On Transferring of Transient Signals along Hollow Waveguide

Authors: E. Eroglu, S. Semsit, E. Sener, U.S. Sener

Abstract:

In electromagnetics, there are three canonical boundary value problems with given initial conditions for the electromagnetic field sought, namely: the cavity problem, the waveguide problem, and the external problem. The cavity problem and the waveguide problem have been rigorously studied, and new results arose in original works over the past decades. Based on studies of the analytical time-domain method Evolutionary Approach to Electromagnetics (EAE), the electromagnetic field strength vectors produced by a time-dependent source function are sought. The fields are sought in the L2 Hilbert space. The source function that performs the signal transfer, the energy, and the surplus of energy are demonstrated with full clarity. The depth of the method and the ease of its application emerge from the results obtained. The main discussion concerns a perfectly electrically conducting, hollow waveguide. Although well-studied time-domain mode problems are mentioned, specifically, the modes which have a hollow (i.e., medium-free) cross-section domain are considered.

Keywords: evolutionary approach to electromagnetics, time-domain waveguide mode, Neumann problem, Dirichlet boundary value problem, Klein-Gordon

Procedia PDF Downloads 311
4472 Measurement of Intellectual Capital in an Algerian Company

Authors: S. Brahmi, S. Aitouche, M. D. Mouss

Abstract:

Every modern company should measure the value of its intellectual capital and report it to complement the traditional annual balance sheets. The purpose of this work is to measure the intellectual capital in an Algerian company (or production system) using the Weightless Wealth Tool Kit (WWTK). The results of the measurement of intellectual capital are supplemented by traditional financial ratios. The measurement was applied to the National Company of Wells Services (ENSP) in Hassi Messaoud city, in the south of Algeria. We calculated the intellectual capital (intangible resources) of the ENSP to help the organization better capitalize on the potential of its workers and their know-how. The intangible value of the ENSP was evaluated at 16,936,173,345 DA in 2015.

Keywords: financial valuation, intangible capital, intellectual capital, intellectual capital measurement

Procedia PDF Downloads 269
4471 Postmortem Magnetic Resonance Imaging as an Objective Method for the Differential Diagnosis of a Stillborn and a Neonatal Death

Authors: Uliana N. Tumanova, Sergey M. Voevodin, Veronica A. Sinitsyna, Alexandr I. Shchegolev

Abstract:

An important part of forensic and autopsy research in perinatology is answering the question of live birth versus stillbirth. Postmortem magnetic resonance imaging (MRI) is an objective, non-invasive research method that makes it possible to store data for a long time and to clarify the diagnosis without exhuming the body. The purpose of the research is to study the possibilities of postmortem MRI for distinguishing a stillborn from a newborn who had spontaneous breathing and died on the first day after birth. MRI and morphological data were compared for the bodies of 23 stillborns who died prenatally at a gestational age of 22-39 weeks (group I) and the bodies of 16 newborns who died from 2 to 24 hours after birth (group II). Before the autopsy, postmortem MRI was performed on a Siemens Magnetom Verio 3T device with the body in the supine position. The control group for the MRI studies consisted of 7 live newborns without lung disease (group III). MR signal intensity (SI) was measured on T2WI in the sagittal projection in the lung tissue (L) and the shoulder muscle (M). During the autopsy, a pulmonary swimming test was evaluated, and macro- and microscopic studies were performed. According to the postmortem MRI, the highest mean SI values of the lung (430 ± 27.99) and of the muscle (405.5 ± 38.62) on T2WI were detected in group I and exceeded the corresponding values of group II by 2.7 times. The lowest values were found in the control group: 77.9 ± 12.34 and 119.7 ± 6.3, respectively. In group II, the lung SI was 1.6 times higher than the muscle SI, whereas in group I and in the control group, the muscle SI was 2.1 times and 1.8 times larger than that of the lung, respectively. On the basis of the clinical and morphological data, we derived a formula for determining the breathing index (BI) from postmortem MRI: BI = SI(L) x SI(M) / 100. The mean value of BI in group I, 1801.14 ± 241.6 (range 756 to 3744), was significantly higher than the corresponding mean value in group II, 455.89 ± 137.32 (range 305-638.4), p < 0.05. In the control group, the mean BI value was 91.75 ± 13.3 (range 53 to 154). The BI values were compared with the results of the pulmonary swimming tests and the microscopic examination of the lungs. The boundary value of BI for the differential diagnosis of stillbirth and newborn death was 700. Postmortem MRI thus makes it possible to differentiate a stillborn from a newborn who breathed after birth.
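The breathing index and its diagnostic cutoff can be expressed directly in code. The `classify` helper and its output labels are ours for illustration; the formula, the group-mean signal intensities, and the boundary value of 700 come from the abstract:

```python
def breathing_index(si_lung, si_muscle):
    """BI = SI(L) * SI(M) / 100, the formula derived in the abstract."""
    return si_lung * si_muscle / 100.0

BI_CUTOFF = 700  # reported boundary value for the differential diagnosis

def classify(si_lung, si_muscle):          # helper name is ours
    bi = breathing_index(si_lung, si_muscle)
    return "stillborn" if bi > BI_CUTOFF else "breathed after birth"

# Group-mean T2WI signal intensities from the abstract:
print(classify(430.0, 405.5))   # group I  -> "stillborn"
print(classify(77.9, 119.7))    # controls -> "breathed after birth"
```

Note that the group means reproduce the reported pattern: 430 x 405.5 / 100 ≈ 1744, well above the cutoff, while the control value of about 93 sits well below it.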

Keywords: lung, newborn, postmortem MRI, stillborn

Procedia PDF Downloads 113
4470 Sinusoidal Roughness Elements in a Square Cavity

Authors: Muhammad Yousaf, Shoaib Usman

Abstract:

Numerical studies were conducted using the Lattice Boltzmann Method (LBM) to study natural convection in a square cavity in the presence of roughness. An algorithm based on the single-relaxation-time Bhatnagar-Gross-Krook (BGK) model of the LBM was developed. Roughness was introduced on both the hot and cold walls in the form of sinusoidal roughness elements. The study was conducted for a Newtonian fluid of Prandtl number (Pr) 1.0. The Rayleigh number (Ra) was varied from 10³ to 10⁶ within the laminar regime. The thermal and hydrodynamic behavior of the fluid was analyzed using a differentially heated square cavity with roughness elements present on both the hot and cold walls. Neumann boundary conditions were imposed on the horizontal walls, with the vertical walls treated as isothermal. The roughness elements were held at the same boundary condition as the corresponding walls. The computational algorithm was validated against previous benchmark studies performed with different numerical methods, and good agreement was found. Results indicate that the maximum reduction in the average heat transfer was 16.66 percent at a Ra number of 10⁵.
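The single-relaxation-time BGK kernel underlying such an algorithm can be sketched as a D2Q9 collide-and-stream step. This is an isothermal toy on a periodic domain, without the buoyancy coupling, thermal distribution, or rough-wall treatment of the study (all parameters illustrative):

```python
import numpy as np

# D2Q9 lattice: weights and discrete velocities
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
tau = 0.8            # BGK relaxation time (illustrative)
NX = NY = 32         # periodic toy domain, no walls here

def equilibrium(rho, u):
    cu = np.einsum('qd,xyd->xyq', c, u)
    usq = np.einsum('xyd,xyd->xy', u, u)
    return w * rho[..., None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq[..., None])

def step(f):
    rho = f.sum(axis=-1)                                   # density moment
    u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]    # velocity moment
    f = f + (equilibrium(rho, u) - f) / tau                # BGK collision
    for q in range(9):                                     # streaming (periodic)
        f[..., q] = np.roll(f[..., q], shift=tuple(c[q]), axis=(0, 1))
    return f

f = equilibrium(np.ones((NX, NY)), np.zeros((NX, NY, 2)))  # uniform start
f = step(f)
```

In the study's setting, this kernel would be paired with a second distribution for temperature, a Boussinesq buoyancy force, and bounce-back conditions on the sinusoidal rough walls.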

Keywords: Lattice Boltzmann method, natural convection, Nusselt number, Rayleigh number, roughness

Procedia PDF Downloads 516
4469 Numerical Reproduction of Hemodynamic Change Induced by Acupuncture to ST-36

Authors: Takuya Suzuki, Atsushi Shirai, Takashi Seki

Abstract:

Acupuncture therapy is one of the treatments in traditional Chinese medicine. Recently, some reports have shown the effectiveness of acupuncture. However, its full acceptance has been hindered by the lack of understanding of the mechanism of the therapy. Acupuncture applied to Zusanli (ST-36) enhances blood flow volume in the superior mesenteric artery (SMA), yielding peripheral-vascular-resistance-regulated blood flow in the SMA dominated by the parasympathetic system and inhibition of the sympathetic system. In this study, a lumped-parameter approximation model of blood flow in the systemic arteries was developed. This model was extremely simple, consisting of the aorta, the carotid arteries, the arteries of the four limbs, the SMA, and their peripheral vascular resistances. Here, each individual artery was simplified to a tapered tube, and the resistances were modelled as linear resistances. We numerically investigated the contribution of the peripheral vascular resistance of the SMA to the systemic blood distribution using this model. At the upstream end of the model, which correlates with the left ventricle, two types of boundary condition were applied: mean left ventricular pressure, which correlates with blood pressure (BP), and mean cardiac output, which corresponds to the cardiac index (CI). We attempted to reproduce the experimentally obtained hemodynamic change, in terms of the ratio of the aforementioned hemodynamic parameters to their initial values before the acupuncture, by regulating the peripheral vascular resistances and the upstream boundary condition. First, only the peripheral vascular resistance of the SMA was changed to show the contribution of this resistance to the change in blood flow volume in the SMA, expecting to reproduce the experimentally obtained change. It was found, however, that this was not enough to reproduce the experimental result. Then, we also changed the resistances of the other arteries together with the value given at the upstream boundary. Here, the resistances of the other arteries were changed simultaneously by the same amount. Consequently, we successfully reproduced the hemodynamic change and found that regulation of the upstream boundary condition to the value experimentally obtained after the stimulation is necessary for the reproduction, even though statistically significant changes in BP and CI were not observed in the experiment. It is generally known that sympathetic and parasympathetic tones take part in the regulation of the whole systemic circulation, including the cardiac function. The present result indicates that stimulation of ST-36 could induce vasodilation of the peripheral circulation of the SMA and vasoconstriction of that of the other arteries. In addition, it implies that the experimentally obtained small changes in BP and CI induced by the acupuncture may be involved in the therapeutic response.

Keywords: acupuncture, hemodynamics, lumped-parameter approximation, modeling, systemic vascular resistance

Procedia PDF Downloads 212
4468 Thermal Buckling Response of Cylindrical Panels with Higher Order Shear Deformation Theory—a Case Study with Angle-Ply Laminations

Authors: Humayun R. H. Kabir

Abstract:

An analytical solution previously used for static and free-vibration response is extended to the thermal buckling response of cylindrical panels with anti-symmetric laminations. The partial differential equations that govern the kinematic behavior of shells produce five coupled differential equations. The basic displacement and rotational unknowns are similar to those of first-order shear deformation theory: three displacements in spatial space and two rotations about the in-plane axes. No drilling degree of freedom is considered. Boundary conditions are taken as completely hinged on all edges so that the panel responds to thermal induction. Two sets of double Fourier series are considered in the analytical solution process. The sets are selected so as to satisfy the mixed type of natural boundary conditions. Numerical results are presented for the first 10 eigenvalues and the first 10 mode shapes for the Ux, Uy, and Uz components. The numerical results are compared with a finite-element-based solution.

Keywords: higher order shear deformation, composite, thermal buckling, angle-ply laminations

Procedia PDF Downloads 359
4467 Sediment Patterns from Fluid-Bed Interactions: A Direct Numerical Simulations Study on Fluvial Turbulent Flows

Authors: Nadim Zgheib, Sivaramakrishnan Balachandar

Abstract:

We present results on the initial formation of ripples from an initially flattened erodible bed. We use direct numerical simulations (DNS) of turbulent open channel flow over a fixed sinusoidal bed coupled with hydrodynamic stability analysis. We use the direct forcing immersed boundary method to account for the presence of the sediment bed. The resolved flow provides the bed shear stress and consequently the sediment transport rate, which is needed in the stability analysis of the Exner equation. The approach is different from traditional linear stability analysis in the sense that the phase lag between the bed topology, and the sediment flux is obtained from the DNS. We ran 11 simulations at a fixed shear Reynolds number of 180, but for different sediment bed wavelengths. The analysis allows us to sweep a large range of physical and modelling parameters to predict their effects on linear growth. The Froude number appears to be the critical controlling parameter in the early linear development of ripples, in contrast with the dominant role of particle Reynolds number during the equilibrium stage.

Keywords: direct numerical simulation, immersed boundary method, sediment-bed interactions, turbulent multiphase flow, linear stability analysis

Procedia PDF Downloads 166
4466 Combining the Fictitious Stress Method and Displacement Discontinuity Method in Solving Crack Problems in Anisotropic Material

Authors: Bahatti̇n Ki̇mençe, Uğur Ki̇mençe

Abstract:

In this study, the influence functions of the displacement discontinuity in an anisotropic elastic medium are obtained in order to produce the boundary element equations. A Displacement Discontinuity Method (DDM) formulation is presented with the aim of modeling two-dimensional elastic fracture problems. This formulation is found by analytical integration of the fundamental solution along a straight-line crack. For this purpose, Kelvin's fundamental solutions for anisotropic media on an infinite plane are used to form dipoles from singular loads, and various combinations of these dipoles are used to obtain the influence functions of displacement discontinuity. This study introduces a technique for coupling the Fictitious Stress Method (FSM) and the DDM; the technique is applied to several examples to demonstrate the effectiveness of the proposed coupling method. The displacement discontinuity equations are obtained by using dipole solutions calculated with known singular force solutions in an anisotropic medium. The displacement discontinuity method obtained from the solutions of these equations is combined with the fictitious stress method and compared on various examples. One or more crack problems with various geometries in rectangular plates, in finite and infinite regions, under tensile stress were examined with the coupled FSM and DDM in the anisotropic setting, and the effectiveness of the coupled method was demonstrated. Since crack problems can be modeled more easily with the DDM, its use has increased recently. In obtaining the displacement discontinuity equations, Papkovitch functions were used, as in Crouch, and harmonic functions were chosen to satisfy various boundary conditions. A comparison is made between the two indirect boundary element formulations, the DDM and an extension of the FSM, for solving problems involving cracks. Several numerical examples are presented, and the outcomes are compared with existing analytical or reference results.

Keywords: displacement discontinuity method, fictitious stress method, crack problems, anisotropic material

Procedia PDF Downloads 66
4465 Efficient Frontier: Comparing Different Volatility Estimators

Authors: Tea Poklepović, Zdravka Aljinović, Mario Matković

Abstract:

Modern Portfolio Theory (MPT), according to Markowitz, states that investors form mean-variance efficient portfolios which maximize their utility. Markowitz proposed the standard deviation as a simple measure of portfolio risk and the lower semi-variance as the only risk measure of interest to rational investors. This paper uses a third volatility estimator, based on intraday data, and compares three efficient frontiers on the Croatian stock market. The results show that the range-based volatility estimator outperforms both the mean-variance and the lower semi-variance models.

Keywords: variance, lower semi-variance, range-based volatility, MPT

Procedia PDF Downloads 498
4464 Transfer Rate of Organic Water Contaminants through a Passive Sampler Membrane of Polyethersulfone (PES)

Authors: Hamidreza Sharifan, Audra Morse

Abstract:

Accurate assessments of contaminant concentrations based on traditional grab sampling methods are not always possible. Passive samplers offer an attractive alternative to traditional sampling methods that overcomes these limitations. The POCIS approach has been used as a screening tool for determining the presence/absence, possible sources, and relative amounts of organic compounds at field sites. The objective of the present research is the mass transfer of five water contaminants (atrazine, caffeine, bentazon, ibuprofen, atenolol) through the water boundary layer (WBL) and the membrane. More specific objectives include establishing a relationship between the sampling rate and the water solubility of the compounds, as well as comparing the molecular weights of the compounds and their concentrations at the time of equilibrium. Determining whether the water boundary layer affects the transport rate through the membrane is another main objective of this paper. After GC-MS analysis of the compounds, and to assess the WBL effect in this experiment, the Sherwood number for the experimental tank was developed. A close relationship between the feed concentration of a compound and its sampling rate was observed.
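For context, the linear-uptake model commonly applied to POCIS-type samplers relates accumulated sorbent mass to ambient concentration through the sampling rate. The function and the numbers below are illustrative only, not the paper's calibration data:

```python
def ambient_concentration(m_sorbed_ng, r_s_L_per_day, t_days):
    """Invert the linear-uptake relation M_s = C_w * R_s * t for C_w (ng/L)."""
    return m_sorbed_ng / (r_s_L_per_day * t_days)

# e.g. 840 ng accumulated over a 14-day deployment with R_s = 0.2 L/day:
cw = ambient_concentration(840.0, 0.2, 14.0)
print(round(cw, 6))  # 300.0 ng/L
```

A WBL-limited compound shows a smaller effective R_s than the membrane-limited case, which is exactly what the Sherwood-number characterization of the tank is meant to diagnose.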

Keywords: passive sampler, water contaminants, PES-transfer rate, contaminant concentrations

Procedia PDF Downloads 442
4463 Topology Optimization of the Interior Structures of Beams under Various Load and Support Conditions with Solid Isotropic Material with Penalization Method

Authors: Omer Oral, Y. Emre Yilmaz

Abstract:

Topology optimization is an approach that optimizes material distribution within a given design space for a certain load and boundary conditions by providing performance goals. It uses various restrictions such as boundary conditions, sets of loads, and constraints to maximize the performance of the system. It differs from size and shape optimization methods, but it retains some features of both. In this study, the interior structures of the parts were optimized by using the SIMP (Solid Isotropic Material with Penalization) method. The volume of the part was a preassigned parameter, and minimum deflection was the objective function. The basic idea behind the theory was considered, and different methods were discussed. The Rhinoceros 3D design tool was used with the Grasshopper and TopOpt plugins to create and optimize parts. A Grasshopper algorithm was designed and tested for different beams, sets of arbitrarily located forces, and support types such as pinned, fixed, etc. Finally, 2.5D shapes were obtained and verified by observing the changes in the density function.
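The penalization at the heart of SIMP can be shown in a few lines. This is the standard modified-SIMP material interpolation, not code extracted from the Grasshopper/TopOpt workflow described above:

```python
import numpy as np

def simp_young_modulus(rho, E0=1.0, Emin=1e-9, p=3.0):
    """Modified SIMP interpolation: E(rho) = Emin + rho^p * (E0 - Emin)."""
    return Emin + rho**p * (E0 - Emin)

rho = np.array([0.0, 0.5, 1.0])
E = simp_young_modulus(rho)
# With p = 3, a half-dense element keeps only 0.5^3 = 12.5% of the full
# stiffness, so the optimizer is pushed toward solid/void (0/1) designs.
```

The small Emin floor keeps void elements numerically stable in the finite element solve, while the exponent p makes intermediate densities structurally inefficient.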

Keywords: Grasshopper, lattice structure, microstructures, Rhinoceros, solid isotropic material with penalization method, TopOpt, topology optimization

Procedia PDF Downloads 115
4462 Developing Fault Tolerance Metrics of Web and Mobile Applications

Authors: Ahmad Mohsin, Irfan Raza Naqvi, Syda Fatima Usamn

Abstract:

Applications with a higher fault tolerance index are considered more reliable and trustworthy to drive quality. In recent years, application development has shifted from traditional desktop and web software to native and hybrid applications for the web and mobile platforms. With the emergence of the Internet of Things (IoT), cloud, and big data trends, the need to measure the fault tolerance of these complex applications has increased in order to evaluate their performance. There is a significant gap between fault tolerance metrics development and measurement. Classic quality metric models focused on metrics for traditional systems, ignoring the essence of today's applications' software, hardware, and deployment characteristics. In this paper, we propose simple metrics to measure fault tolerance considering general requirements for web and mobile applications. We have aligned factors and subfactors using GQM for metrics development, considering the nature of mobile and web apps. A systematic mathematical formulation is given to measure the metrics quantitatively. Three web and mobile applications are selected to measure the fault tolerance factors using the formulated metrics. The applications are then analysed on the basis of results from observations in a controlled environment on different mobile devices. Quantitative results are presented depicting the fault tolerance of the respective applications.

Keywords: web and mobile applications, reliability, fault tolerance metric, quality metrics, GQM based metrics

Procedia PDF Downloads 326
4461 Determination of the Runoff Coefficient in Urban Regions, an Example from Haifa, Israel

Authors: Ayal Siegel, Moshe Inbar, Amatzya Peled

Abstract:

This study examined the characteristic runoff coefficient in different urban areas. The main area studied is located in the city of Haifa, northern Israel. Haifa spreads out eastward from the Mediterranean seacoast to the top of the Carmel mountain range, at an elevation of 300 m above sea level. For this research project, four watersheds were chosen, each characterizing a different part of the city: 1) Upper Hadar, a spacious suburb on the upper mountainside; 2) Qiryat Eliezer, a crowded suburb on a level plane of the watershed; 3) Technion, a large technical research university located halfway between the top of the mountain range and the coastline; and 4) Keret, a remote suburb on the southwestern outskirts of Haifa. In all of the watersheds found suitable, instruments were installed to continuously measure the water level flowing in the channels. Three rainfall gauges scattered over the study area complete the hydrological requirements of this research project. The runoff coefficient C in peak discharge events was determined by the Rational Formula. The main research finding is the significant relationship between the intensity of rainfall and the impervious area connected to the drainage system of the watershed. For less intense rainfall, the full potential of the connected impervious area will not be exploited. As a result, the runoff coefficient value decreases, as do the peak discharge rate and the runoff yield from the storm event. The research results will enable application to other areas by means of a hydrological model set up on GIS software, making it possible to estimate the runoff coefficient of any given city watershed.
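Back-calculating C from the Rational Formula Q = C·i·A is a one-line computation once the units are settled; a minimal sketch with made-up numbers (not the measured Haifa values):

```python
def runoff_coefficient(q_peak_m3s, intensity_mmhr, area_km2):
    """Back-calculate C from the Rational Formula Q = C * i * A.

    With i in mm/hr and A in km^2, the product i*A converts to m^3/s
    through the factor (1/3.6e6 m/s per mm/hr) * (1e6 m^2 per km^2)
    = 1/3.6, hence the 3.6 below.
    """
    return q_peak_m3s * 3.6 / (intensity_mmhr * area_km2)

# Illustrative numbers only:
C = runoff_coefficient(q_peak_m3s=1.2, intensity_mmhr=20.0, area_km2=0.9)
print(round(C, 3))  # 0.24
```

The study's finding then reads naturally in these terms: at lower rainfall intensity i, a smaller share of the connected impervious area contributes, so the back-calculated C drops.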

Keywords: runoff coefficient, rational method, time of concentration, connected impervious area

Procedia PDF Downloads 337
4460 Molecular Dynamics Simulation of Free Vibration of Graphene Sheets

Authors: Seyyed Feisal Asbaghian Namin, Reza Pilafkan, Mahmood Kaffash Irzarahimi

Abstract:

This paper considers the vibration of single-layered graphene sheets using molecular dynamics (MD) and nonlocal elasticity theory. For the MD simulations, the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), an open-source software package, is used to obtain the fundamental frequencies. On the other hand, the governing equations are derived using nonlocal elasticity and first-order shear deformation theory (FSDT) and solved using the generalized differential quadrature (GDQ) method. The small-scale effect is introduced into the governing equations of motion through the nonlocal parameter. The effects of different side lengths, boundary conditions, and the nonlocal parameter are inspected for the aforementioned methods. Results obtained from the MD simulations are compared with those of the nonlocal elasticity theory to calculate appropriate values for the nonlocal parameter. Nonlocal parameter values are suggested for graphene sheets with various boundary conditions. Furthermore, it is shown that the nonlocal elasticity approach using classical plate theory (CLPT) assumptions overestimates the natural frequencies.

Keywords: graphene sheets, molecular dynamics simulations, fundamental frequencies, nonlocal elasticity theory, nonlocal parameter

Procedia PDF Downloads 499
4459 Aerodynamic Analysis of Dimple Effect on Aircraft Wing

Authors: E. Livya, G. Anitha, P. Valli

Abstract:

The main objective of aircraft aerodynamics is to enhance the aerodynamic characteristics and maneuverability of the aircraft. This enhancement includes the reduction of drag and of the stall phenomenon. An airfoil that contains dimples will have comparatively less drag than a plain airfoil. Introducing dimples on the aircraft wing creates turbulence by generating vortices, which delays boundary layer separation, resulting in a decrease in pressure drag and an increase in the stall angle. In addition, the wake reduction leads to a reduction in acoustic emission. The overall objective of this paper is to improve aircraft maneuverability by delaying the flow separation point at stall and thereby reducing the drag by applying the dimple effect over the aircraft wing. This project includes both computational and experimental analysis of the dimple effect on an aircraft wing, using the NACA 0018 airfoil. Dimple shapes of semi-sphere, hexagon, cylinder, and square are selected for the analysis; the airfoil is tested at an inlet velocity of 30 m/s at different angles of attack (5˚, 10˚, 15˚, 20˚, and 25˚). The analysis favours the dimple effect by increasing the L/D ratio, thereby providing maximum aerodynamic efficiency and enhanced performance for the aircraft.

Keywords: airfoil, dimple effect, turbulence, boundary layer separation

Procedia PDF Downloads 517
4458 Reexamining Contrarian Trades as a Proxy of Informed Trades: Evidence from China's Stock Market

Authors: Dongqi Sun, Juan Tao, Yingying Wu

Abstract:

This paper reexamines the appropriateness of contrarian trades as a proxy for informed trades, using high-frequency Chinese stock data. Employing this measure over 5-minute intervals, a U-shaped intraday pattern in the probability of informed trading (PIN) is found for the CSI300 stocks, which is consistent with previous findings for other markets. However, when dividing the trades by size, a reversed U-shaped PIN is observed for large-sized trades, as opposed to the U-shaped pattern for small- and medium-sized trades. Drawing on this mixed evidence across trade sizes, the price impact of trades is further investigated. By examining the relationship between trade imbalances and unexpected returns, large-sized trades are found to have a significant price impact. This implies that in intervals with large trades, it is non-contrarian trades that are more likely to be informed trades. Taking account of the price impact of large-sized trades, non-contrarian trades are used to proxy for informed trading in intervals with large trades, while contrarian trades are still used to measure informed trading in other intervals. A stronger U-shaped PIN pattern is demonstrated with this modification. Auto-correlation and information advantage tests for robustness also support the modified informed trading measure.
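The contrarian-trade classification can be sketched as follows: a trade is contrarian when its direction opposes the sign of the preceding price move. The toy data and the exact look-back definition here are ours, for illustration only (the paper's construction from high-frequency CSI300 data is more involved):

```python
import numpy as np

rng = np.random.default_rng(1)
past_ret = rng.normal(0, 0.01, 200)   # return preceding each trade (toy data)
direction = rng.choice([1, -1], 200)  # +1 buy, -1 sell

# A trade is contrarian when its direction opposes the preceding return:
# buying after a drop, or selling after a rise.
contrarian = (direction * np.sign(past_ret)) < 0

# Interval trade imbalance, of the kind related to unexpected returns
# in the paper's price-impact tests.
imbalance = direction.sum() / direction.size

print(contrarian.mean(), imbalance)
```

The paper's modification then amounts to flipping which side of this classification proxies for informed trading in intervals dominated by large trades.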

Keywords: contrarian trades, informed trading, price impact, trade imbalance

Procedia PDF Downloads 150
4457 Callous-Unemotional Traits in Preschoolers: Distinct Associations with Empathy Subcomponents

Authors: E. Stylianopoulou, A. K. Fanti

Abstract:

Objective: Children scoring high on callous-unemotional (CU) traits exhibit a lack of empathy. More specifically, children scoring high on CU traits appear to exhibit deficits in affective empathy or deficits in other constructs. However, little is known about cognitive empathy and its relation with CU traits in preschoolers. Although empathy is measurable at a very young age, relatively little research has focused on empathy in preschoolers compared with older children with CU traits. The present study examines cognitive and affective empathy in preschoolers with CU traits. The aim was to examine the differences between cognitive and affective empathy in these individuals. Based on previous research in children with CU traits, it was hypothesized that preschoolers scoring high on CU traits would show deficits in both cognitive and affective empathy, with more deficits detected in affective empathy than in cognitive empathy. Method: The sample consisted of 209 children, of whom 109 were male and 100 were female, between the ages of 3 and 7 (M = 4.73, SD = 0.71). Of those participants, only 175 completed all the items. The Inventory of Callous-Unemotional Traits was used to measure CU traits. Moreover, the Griffith Empathy Measure (GEM) Affective Scale and the GEM Cognitive Scale were used to measure affective and cognitive empathy, respectively. Results: Linear regression was applied to examine the preceding hypotheses. The results showed that, in general, there was a significant, moderate negative association between CU traits and empathy. More specifically, there was a significant, moderate negative relation between CU traits and cognitive empathy. Surprisingly, the results indicated no significant relation between CU traits and affective empathy. Conclusion: The current findings support the idea that preschoolers with CU traits show deficits in understanding others' emotions, indicating a significant association between CU traits and cognitive empathy. However, no such relation was found between CU traits and affective empathy. The current results raise the importance of focusing more on cognitive empathy in preschoolers with CU traits, a component that seems to have been underestimated until now.

Keywords: affective empathy, callous-unemotional traits, cognitive empathy, preschoolers

Procedia PDF Downloads 133
4456 Modeling by Application of the Nernst-Planck Equation and Film Theory for Predicting of Chromium Salts through Nanofiltration Membrane

Authors: Aimad Oulebsir, Toufik Chaabane, Sivasankar Venkatramann, Andre Darchen, Rachida Maachi

Abstract:

The objective of this study is to propose a model for predicting the transfer mechanism of trivalent ions through a nanofiltration (NF) membrane by introducing the concentration-polarization phenomenon, and to study its influence on salt retention. The model combines the Nernst-Planck equation with the equations of film theory and is characterized by two transfer parameters, the reflection coefficient σ and the solute permeability Ps, which are estimated numerically. The thickness of the boundary layer, δ, the solute concentration at the membrane surface, Cm, and the concentration profile in the polarization layer have also been estimated. The suggested mathematical formulation was established, and the retentions of trivalent salts were estimated and compared with the experimental results. A comparison between the results with and without the concentration-polarization phenomenon is made, and the boundary layer thickness on the feed side is given. Experimental and calculated results are shown to be in good agreement, and the model is then successfully extended to experimental data reported in the literature.
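The coupling of a membrane transport model with film theory described above can be sketched numerically. The snippet below is an illustrative implementation using the Spiegler-Kedem retention expression combined with the film-theory polarization correction; all parameter values (σ, Ps, the permeate flux Jv, and the mass-transfer coefficient k = D/δ) are hypothetical, not the fitted values of the study:

```python
import math

def real_rejection(sigma, Ps, Jv):
    """Spiegler-Kedem 'real' rejection at the membrane surface for a given
    permeate flux Jv, reflection coefficient sigma and permeability Ps."""
    F = math.exp(-Jv * (1.0 - sigma) / Ps)
    return sigma * (1.0 - F) / (1.0 - sigma * F)

def observed_rejection(sigma, Ps, Jv, k):
    """Film theory: concentration polarization with mass-transfer
    coefficient k = D / delta lowers the rejection seen in the bulk via
    (1 - Robs)/Robs = ((1 - Rreal)/Rreal) * exp(Jv / k)."""
    Rr = real_rejection(sigma, Ps, Jv)
    ratio = (1.0 - Rr) / Rr * math.exp(Jv / k)
    return 1.0 / (1.0 + ratio)
```

As k grows (thin boundary layer), the observed rejection approaches the real rejection, which is the qualitative effect the abstract compares with and without polarization.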

Keywords: nanofiltration, concentration polarisation, chromium salts, mass transfer

Procedia PDF Downloads 269
4455 Thermal Instability in Rivlin-Ericksen Elastico-Viscous Nanofluid with Convective Boundary Condition: Effect of Vertical Throughflow

Authors: Shivani Saini

Abstract:

The effect of vertical throughflow on the onset of convection in a Rivlin-Ericksen elastico-viscous nanofluid with a convective boundary condition is investigated. The flow is simulated with the modified Darcy model under the assumption that the nanoparticle volume fraction is not actively managed on the boundaries. The heat conservation equation is formulated by introducing the convective term of the nanoparticle flux. A linear stability analysis based upon normal modes is performed, and an approximate solution of the eigenvalue problem is obtained using the Galerkin weighted residual method. The dependence of the Rayleigh number on the various viscous and nanofluid parameters is investigated. It is found that the throughflow and nanofluid parameters hasten the onset of convection, while the capacity ratio, the kinematic viscoelasticity, and the Vadasz number do not govern stationary convection. With the convective component of the nanoparticle flux, the critical wave number is a function of the nanofluid parameters as well as the throughflow parameter. The obtained solution provides important physical insight into the behavior of this model.
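As a hedged illustration of the Galerkin weighted residual step, the classical clear-fluid benchmark below (not the full Rivlin-Ericksen nanofluid system of the abstract) shows how a one-term trial function reduces the stability eigenvalue problem to an algebraic Rayleigh-number expression that is then minimized over the wave number:

```python
import math

def rayleigh(a):
    """One-term Galerkin approximation for Rayleigh-Benard convection
    between stress-free boundaries: the trial function W = sin(pi z)
    collapses the eigenvalue problem to Ra(a) = (pi^2 + a^2)^3 / a^2
    (the classical clear-fluid result, shown only to illustrate the
    method)."""
    return (math.pi**2 + a**2) ** 3 / a**2

# Minimize over the wave number a to obtain the critical Rayleigh number;
# the exact minimum is Ra_c = 27 pi^4 / 4 at a_c = pi / sqrt(2).
a_grid = [i * 1e-3 for i in range(500, 5000)]
a_c = min(a_grid, key=rayleigh)
Ra_c = rayleigh(a_c)
```

In the nanofluid problem of the abstract, the same minimization is carried out with Ra depending additionally on the throughflow and nanofluid parameters, which is why the critical wave number becomes a function of those parameters.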

Keywords: Darcy model, nanofluid, porous layer, throughflow

Procedia PDF Downloads 122
4454 Modified Clusterwise Regression for Pavement Management

Authors: Mukesh Khadka, Alexander Paz, Hanns de la Fuente-Mella

Abstract:

Typically, pavement performance models are developed in two steps: (i) pavement segments with similar characteristics are grouped together to form a cluster, and (ii) the corresponding performance models are developed using statistical techniques. A challenge is to select the characteristics that define clusters and the segments associated with them. If inappropriate characteristics are used, clusters may include homogeneous segments with different performance behavior or heterogeneous segments with similar performance behavior. The prediction accuracy of performance models can be improved by grouping the pavement segments into more uniform clusters using both the characteristics and a performance measure. This grouping is not always possible due to limited information, and it is impractical to include all the potentially significant factors because some of them are unobserved or difficult to measure. The historical performance of pavement segments can be used as a proxy to incorporate the effect of the missing significant factors in the clustering process. The current state of the art proposes Clusterwise Linear Regression (CLR) to determine the pavement clusters and the associated performance models simultaneously; CLR incorporates the effect of the significant factors as well as a performance measure. In this study, a mathematical program was formulated for CLR models including multiple explanatory variables. Pavement data collected recently over the entire state of Nevada were used. The International Roughness Index (IRI) was used as the pavement performance measure because it serves as a unified standard that is widely accepted for evaluating pavement performance, especially in terms of riding quality. The results illustrate the advantage of using CLR. Previous studies have used CLR with experimental data; this study uses actual field data collected across a variety of environmental, traffic, design, and construction and maintenance conditions.
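A minimal sketch of the clusterwise-regression idea follows. This is an exchange-style heuristic in the spirit of Späth's algorithm, not the mathematical program formulated in the paper; the two-cluster setting and variable names are illustrative:

```python
import numpy as np

def clusterwise_regression(X, y, k=2, iters=50, seed=0):
    """Heuristic clusterwise linear regression: alternate between fitting
    one OLS model per cluster and reassigning each observation to the
    cluster whose model predicts it best."""
    rng = np.random.default_rng(seed)
    n = len(y)
    labels = rng.integers(0, k, n)
    Xd = np.column_stack([np.ones(n), X])  # add an intercept column
    for _ in range(iters):
        betas = []
        for c in range(k):
            mask = labels == c
            while mask.sum() < Xd.shape[1]:   # keep clusters non-degenerate
                mask[rng.integers(0, n)] = True
            beta, *_ = np.linalg.lstsq(Xd[mask], y[mask], rcond=None)
            betas.append(beta)
        # reassign each point to the best-fitting cluster model
        resid = np.column_stack([(y - Xd @ b) ** 2 for b in betas])
        new_labels = resid.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels, betas
```

On data generated by two distinct linear regimes, the per-cluster fits reduce the residual error well below a single pooled regression, which is the benefit the abstract attributes to CLR for pavement segments.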

Keywords: clusterwise regression, pavement management system, performance model, optimization

Procedia PDF Downloads 236
4453 An Analytical Wall Function for 2-D Shock Wave/Turbulent Boundary Layer Interactions

Authors: X. Wang, T. J. Craft, H. Iacovides

Abstract:

When handling the near-wall regions of turbulent flows, it is necessary to account for the viscous effects which are important over the thin near-wall layers. Low-Reynolds-number turbulence models do this by including explicit viscous and damping terms which become active in the near-wall regions, and by using very fine near-wall grids to properly resolve the steep gradients present. In order to overcome the cost associated with low-Re turbulence models, a more advanced wall-function approach has been implemented within OpenFOAM and tested, together with a standard log-law-based wall function, in the prediction of flows which involve 2-D shock wave/turbulent boundary layer interactions (SWTBLIs). On the whole, in the calculation of the impinging-shock interaction, the three turbulence modelling strategies, the Launder-Sharma k-ε model with the Yap correction (LS), and the high-Re k-ε model with either the standard wall function (SWF) or the analytical wall function (AWF), display good predictions of the wall pressure. However, the SWF approach tends to underestimate the tendency of the flow to separate as a result of the SWTBLI. The analytical wall function, on the other hand, is able to reproduce the shock-induced flow separation and returns predictions similar to those of the low-Re model, using a much coarser mesh.
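The standard log-law wall function mentioned above can be sketched as follows. The constants and the fixed-point recovery of the friction velocity are the textbook form, not OpenFOAM's exact implementation:

```python
import math

KAPPA, E = 0.41, 9.8  # log-law constants (typical values; codes vary slightly)

def u_plus(y_plus):
    """Standard wall-function profile: viscous sublayer u+ = y+ below the
    matching point, log law u+ = ln(E y+) / kappa above it."""
    y_match = 11.6  # approximate intersection of the two branches
    return y_plus if y_plus < y_match else math.log(E * y_plus) / KAPPA

def friction_velocity(u_p, y_p, nu, guess=0.1, iters=100):
    """Recover the friction velocity u_tau from the velocity u_p at the
    first near-wall cell centre (wall distance y_p) by damped fixed-point
    iteration on u_p = u_tau * u_plus(y_p * u_tau / nu) -- the step a
    wall function performs to supply the wall shear stress on a coarse
    mesh."""
    u_tau = guess
    for _ in range(iters):
        u_tau = 0.5 * (u_tau + u_p / u_plus(max(y_p * u_tau / nu, 1e-12)))
    return u_tau
```

This recovery is what lets wall-function meshes place the first cell well outside the viscous sublayer, which is the cost advantage over low-Re models noted in the abstract.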

Keywords: SWTBLIs, skin-friction, turbulence modeling, wall function

Procedia PDF Downloads 332
4452 Trinary Affinity—Mathematic Verification and Application (1): Construction of Formulas for the Composite and Prime Numbers

Authors: Liang Ming Zhong, Yu Zhong, Wen Zhong, Fei Fei Yin

Abstract:

Trinary affinity is a description of existence: every object exists, as it is known and spoken of, in a system of two differences (denoted dif₁, dif₂) and one similarity (Sim), equivalently expressed as dif₁ / Sim / dif₂ and kn / 0 / tkn (kn = the known, tkn = the 'to be known', 0 = the zero point of knowing). This is mathematically verified and illustrated in this paper by arranging all integers into three columns, where each number exists as a difference in relation to another number as the other difference, with the two differences arbitrated by a third number as the Sim, resulting in a trinary affinity, or trinity, of three numbers, of which one is the known, another the 'to be known', and the third the zero (0) from which both the kn and the tkn are measured and specified. Consequently, any number is horizontally specified as 3n, 3n - 1, or 3n + 1, and vertically as Cn + c, so that any number occurs at the intersection of its X and Y axes and is represented by its X and Y coordinates, as any point on Earth's surface is by its latitude and longitude. Technically, (i) primes are viewed and treated as progenitors, and composites as descending from them, forming families of composites, each capable of being measured and specified from its own zero, called here the realistic zero (denoted 0r, as contrasted with the mathematical zero, 0m), which corresponds to the constant c and whose nature separates the composite and prime numbers; and (ii) any number is considered as having a magnitude as well as a position, so that a number is verified as a prime first by referring to its descriptive formula and then by making sure that no composite number can occur at its position, by dividing it by factors provided by the composite-number formulas. The paper consists of three parts: 1) a brief explanation of the trinary affinity of things, 2) the eight formulas that represent all the primes, and 3) the families of composite numbers, each represented by a formula.
A composite-number family is described as 3n + f₁·f₂. Since there are infinitely many composite-number families, verifying the primality of a large probable prime requires dividing it by several or many factors f₁ drawn from a range of composite-number formulas, a procedure that is as laborious as it is the surest way to verify a great number's primality. (It is thus possible to substitute planned division for trial division.)
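The three-column arrangement and the division-based primality check can be illustrated with a short sketch. This is a plain trial-division illustration of the fact that every prime greater than 3 lies in one of the 3n ± 1 columns; it is not an implementation of the paper's eight prime formulas:

```python
def column(m):
    """Every integer falls into exactly one of three columns: 3n, 3n - 1, 3n + 1."""
    return {0: "3n", 1: "3n+1", 2: "3n-1"}[m % 3]

def is_prime(m):
    """Trial division restricted to candidate divisors in the 3n +/- 1
    columns (equivalently 6k +/- 1), since multiples of 2 and 3 are
    screened out first."""
    if m < 2:
        return False
    if m in (2, 3):
        return True
    if m % 2 == 0 or m % 3 == 0:
        return False
    f = 5  # test divisors of the form 6k - 1 and 6k + 1 only
    while f * f <= m:
        if m % f == 0 or m % (f + 2) == 0:
            return False
        f += 6
    return True
```

Restricting divisors to the 3n ± 1 columns is the elementary counterpart of the paper's idea of dividing only by factors supplied by the composite-number formulas.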

Keywords: trinary affinity, difference, similarity, realistic zero

Procedia PDF Downloads 196
4451 Semiconducting Nanostructures Based Organic Pollutant Degradation Using Natural Sunlight for Water Remediation

Authors: Ankur Gupta, Jayant Raj Saurav, Shantanu Bhattacharya

Abstract:

In this work, we report an effective water filtration system based on the photocatalytic performance of semiconducting dense nano-brushes under natural sunlight. During thin-film photocatalysis, usually performed by a deposited layer of photocatalyst, a stagnant boundary layer is created near the catalyst, which adversely affects the rate of adsorption because of diffusional restrictions. One strategy is to disrupt this laminar boundary layer by creating a super-dense nanostructure near the surface of the catalyst. It is further advantageous to fabricate a structured filter element, with the as-grown nanostructures protruding from its surface, through which the water passes, so that dye remediation can be performed by solar means. This remediation was initially limited to low efficiency because of diffusional restrictions, but has now become a fast process owing to the development of filter materials with protruding dense nanostructures. The effect of the increased surface area due to microholes on the fraction adsorbed is also investigated, and it is found that there is an optimum hole diameter for maximum adsorption.

Keywords: nano materials, photocatalysis, waste water treatment, water remediation

Procedia PDF Downloads 320
4450 Stability Analysis of Three-Dimensional Flow and Heat Transfer over a Permeable Shrinking Surface in a Cu-Water Nanofluid

Authors: Roslinda Nazar, Amin Noor, Khamisah Jafar, Ioan Pop

Abstract:

In this paper, the steady laminar three-dimensional boundary layer flow and heat transfer of a copper (Cu)-water nanofluid in the vicinity of a permeable shrinking flat surface in an otherwise quiescent fluid are studied. A nanofluid mathematical model in which the effect of the nanoparticle volume fraction is taken into account is considered. The governing nonlinear partial differential equations are transformed into a system of nonlinear ordinary differential equations using a similarity transformation, which is then solved numerically using the function bvp4c from Matlab. Dual solutions (upper- and lower-branch solutions) are found for the similarity boundary layer equations for a certain range of the suction parameter. A stability analysis has been performed to show which branch solutions are stable and physically realizable. The numerical results for the skin friction coefficient and the local Nusselt number, as well as the velocity and temperature profiles, are obtained, presented, and discussed in detail for a range of the governing parameters.
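The similarity boundary layer problem is solved in the paper with Matlab's bvp4c. As a self-contained stand-in, the sketch below solves the classical Blasius equation f''' + ½ f f'' = 0 (a stretching-surface relative of the shrinking-sheet system, not the Cu-water nanofluid model itself) by shooting on the unknown wall value f''(0), which is the quantity behind the skin friction coefficient:

```python
def shoot(s, h=0.01, eta_max=10.0):
    """RK4-integrate the Blasius system f''' = -0.5 f f'' from
    f(0) = 0, f'(0) = 0, f''(0) = s and return f'(eta_max)."""
    def deriv(y):
        return (y[1], y[2], -0.5 * y[0] * y[2])
    y = [0.0, 0.0, s]
    for _ in range(int(eta_max / h)):
        k1 = deriv(y)
        k2 = deriv([y[i] + 0.5 * h * k1[i] for i in range(3)])
        k3 = deriv([y[i] + 0.5 * h * k2[i] for i in range(3)])
        k4 = deriv([y[i] + h * k3[i] for i in range(3)])
        y = [y[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(3)]
    return y[1]

# Bisect on f''(0) so that the far-field condition f'(inf) = 1 is met;
# for Blasius the classical value is f''(0) ~ 0.332.
lo, hi = 0.1, 1.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if shoot(mid) < 1.0:
        lo = mid
    else:
        hi = mid
wall_shear = 0.5 * (lo + hi)
```

In dual-solution problems like the one in the abstract, two distinct values of the wall shear satisfy the same far-field condition, which is why a stability analysis is needed to decide which branch is physically realizable.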

Keywords: heat transfer, nanofluid, shrinking surface, stability analysis, three-dimensional flow

Procedia PDF Downloads 267
4449 Bright, Dark N-Soliton Solution of Fokas-Lenells Equation Using Hirota Bilinearization Method

Authors: Sagardeep Talukdar, Riki Dutta, Gautam Kumar Saharia, Sudipta Nandy

Abstract:

In nonlinear optics, the Fokas-Lenells equation (FLE) is a well-known integrable equation that describes how ultrashort pulses propagate along an optical fiber. Like any other integrable equation, it admits localized wave solutions. We apply the Hirota bilinearization method to obtain soliton solutions of the FLE; the proposed bilinearization makes use of an auxiliary function. We apply the method to the FLE with a vanishing boundary condition to obtain bright soliton solutions: we obtain the bright 1-soliton and 2-soliton solutions and propose a scheme for obtaining the N-soliton solution. We use an additional parameter that is responsible for the shift in the position of the soliton, and further analysis of the 2-soliton solution is carried out asymptotically. For the non-vanishing boundary condition, we obtain the dark 1-soliton solution. We find that the suggested bilinearization approach, which makes use of the auxiliary function, greatly simplifies the process while still producing the desired outcome. We believe that the present analysis will be helpful in understanding the applications of the FLE in nonlinear optics and other areas of physics.
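For readers unfamiliar with the method, the generic Hirota setup is sketched below, illustrated on the simpler nonlinear Schrödinger equation as a stand-in; the FLE bilinearization of the abstract differs in detail and requires the auxiliary function:

```latex
% Hirota D-operator:
D_x^m D_t^n\, a \cdot b
  = \left(\partial_x - \partial_{x'}\right)^m
    \left(\partial_t - \partial_{t'}\right)^n
    a(x,t)\, b(x',t') \Big|_{x'=x,\; t'=t}

% Substituting q = g/f (g complex, f real) bilinearizes
% i q_t + q_{xx} + 2|q|^2 q = 0 into
\left( i D_t + D_x^2 \right) g \cdot f = 0, \qquad
D_x^2\, f \cdot f = 2 |g|^2

% and the bright 1-soliton follows from the truncated expansion
g = e^{\eta_1}, \qquad
f = 1 + \frac{e^{\eta_1 + \eta_1^*}}{(k_1 + k_1^*)^2}, \qquad
\eta_1 = k_1 x + i k_1^2 t + \eta_1^{(0)}
```

The N-soliton scheme mentioned in the abstract generalizes this truncated expansion to higher orders in a formal expansion parameter.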

Keywords: asymptotic analysis, fokas-lenells equation, hirota bilinearization method, soliton

Procedia PDF Downloads 92
4448 Clinical Use of Opioid Analgesics in China: An Adequacy of Consumption Measure

Authors: Mengjia Zhi, Xingmei Wei, Xiang Gao, Shiyang Liu, Zhiran Huang, Li Yang, Jing Sun

Abstract:

Background: To understand the consumption trend of opioid analgesics and the adequacy of opioid analgesic treatment for moderate to severe pain in China, and to place China's level of pain control in an international perspective. Importance: To the authors' best knowledge, this is the first study in China to measure the adequacy of opioid analgesic treatment for moderate to severe pain that considers the disease pattern and uses a standardized pain treatment guideline. Methods: A retrospective analysis was carried out of the consumption frequency (defined daily doses, DDDs) of opioid analgesics and its trend in China from 2006 to 2016. The adequacy of consumption measure (ACM) was used to estimate the number of needed morphine equivalents and the overall adequacy of opioid analgesic treatment of moderate to severe pain in China, which was then compared with international data. Results: The consumption frequency of opioid analgesics in China increased from 13.2 million DDDs in 2006 to 44.2 million DDDs in 2016, an overall increasing trend. The growth rate was faster at first, especially in 2013, then slowed, with a slight decrease in 2015. The ACM of China increased from 0.0032 in 2006 to 0.0074 in 2016, an overall trend of growth; nevertheless, China's ACM remained at a very poor level throughout 2006-2016. Conclusion: The consumption of opioid analgesics for the treatment of moderate to severe pain in China has consistently been inadequate, and there is a large gap between China and the international level. The reasons behind this problem lie in several areas, including medical staff, patients and the public, health systems, and social and cultural factors.
It is necessary to strengthen the training and education of medical staff and patients, to use mass media to disseminate scientific knowledge of pain management, to encourage communication between doctors and patients, to improve the regulatory system for controlled medicines and the overall health system, and to balance the regulatory goal of preventing abuse with the social goal of meeting people's increasing needs for a better life.
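The ACM construction can be sketched as a ratio of actual consumption to estimated need, both in morphine equivalents. The dose and duration constants below are hypothetical placeholders, and the need side of the published methodology weights several conditions (e.g. terminal cancer, HIV/AIDS, injuries) rather than a single death count:

```python
# HYPOTHETICAL constants for illustration only
STANDARD_DAILY_DOSE_MG = 67.5   # assumed average daily morphine dose (mg)
TREATED_DAYS_PER_DEATH = 90     # assumed days of pain treatment per death

def needed_morphine_mg(deaths_in_pain):
    """Estimated national need in morphine-equivalent milligrams, built
    from the number of deaths requiring pain relief."""
    return deaths_in_pain * TREATED_DAYS_PER_DEATH * STANDARD_DAILY_DOSE_MG

def acm(consumed_morphine_mg, deaths_in_pain):
    """ACM = actual consumption / estimated need; ACM = 1 means
    consumption exactly covers the estimated need."""
    return consumed_morphine_mg / needed_morphine_mg(deaths_in_pain)
```

On this scale, the values of 0.0032-0.0074 reported for China mean that measured consumption covered well under one percent of the estimated need.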

Keywords: opioid analgesics, adequacy of consumption measure, pain control, China

Procedia PDF Downloads 194
4447 The Relationship between the Feeling of Distributive Justice and National Identity of the Youth

Authors: Leila Batmany

Abstract:

This research studies the relationship between the feeling of distributive justice and the national identity of the youth. The analysis empirically investigates the various dimensions of the feeling of justice and their effect on the components of national identity. The study considers justice from four points of view, based on the availability of valuable social resources such as power, wealth, knowledge, and status, corresponding to political, economic, cultural, and status justice, respectively. National identity is considered as the feeling of honour, attachment, and commitment towards the national society and its seven components: history, language, culture, political system, religion, geographical territory, and society. A field study was used as the research method, with the individual as the unit of analysis; 368 young people between the ages of 18 and 29 living in Tehran were chosen randomly according to the Cochran formula. The individual samples were drawn randomly from five districts in the north, south, west, east, and centre of Tehran, based on multistage cluster sampling. Data collection was performed using a questionnaire and interviews. The most important results are as follows: (i) The feeling of economic justice is the weakest among the youth. (ii) The strongest and weakest dimensions of national identity are the historical and the social dimensions, respectively. (iii) There is a positive and significant relationship between the feeling of political and status justice and national identity, whereas no significant relationship exists between economic and cultural justice and national identity. (iv) There is a positive and significant relationship between the feeling of justice in all dimensions and the legitimacy of the political system, and also between the legitimacy of the political system and national identity.
(v) In general, there is a positive and significant relationship between the feeling of distributive justice and national identity among the youth. (vi) It is through the legitimacy of the political system that the feeling of justice can influence national identity.
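The Cochran sample-size computation referenced above can be sketched as follows. The parameter choices (95% confidence, p = 0.5, 5% margin of error) are the conventional defaults, not necessarily those used by the author:

```python
import math

def cochran_n(z=1.96, p=0.5, e=0.05, population=None):
    """Cochran's sample-size formula n0 = z^2 * p * (1 - p) / e^2, with
    the optional finite-population correction
    n = n0 / (1 + (n0 - 1) / N)."""
    n0 = z**2 * p * (1.0 - p) / e**2
    if population is not None:
        n0 = n0 / (1.0 + (n0 - 1.0) / population)
    return math.ceil(n0)
```

With the default parameters the formula gives 385; a finite-population correction for the sampled population could account for a somewhat smaller figure such as the 368 reported.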

Keywords: distributive justice, national identity, legitimacy of political system, Cochran formula, multistage cluster sampling

Procedia PDF Downloads 111