Search results for: surface model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21879

20079 An Extended Inverse Pareto Distribution, with Applications

Authors: Abdel Hadi Ebraheim

Abstract:

This paper introduces a new extension of the Inverse Pareto distribution within the Marshall-Olkin (1997) family of distributions. The model is capable of describing various shapes of aging and failure data. The statistical properties of the new model are discussed, and several methods are used to estimate the parameters involved. Explicit expressions are derived for several types of moments of value in reliability analysis. In addition, the order statistics of samples from the proposed model are studied. Finally, the usefulness of the new model for fitting reliability data is illustrated using two real data sets together with a simulation study.
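The Marshall-Olkin construction replaces a baseline survival function S(x) with δ·S(x) / (1 − (1 − δ)·S(x)). A minimal sketch, assuming the common inverse Pareto parameterization F(x) = (x/(x+β))^α; the paper's exact parameterization may differ:

```python
def inv_pareto_sf(x, alpha, beta):
    """Survival function of the inverse Pareto: S(x) = 1 - (x/(x+beta))**alpha."""
    return 1.0 - (x / (x + beta)) ** alpha

def mo_inv_pareto_sf(x, alpha, beta, delta):
    """Marshall-Olkin extension: S_MO(x) = delta*S(x) / (1 - (1-delta)*S(x))."""
    s = inv_pareto_sf(x, alpha, beta)
    return delta * s / (1.0 - (1.0 - delta) * s)

# The transform preserves the boundary behavior of a survival function:
# S_MO(0) = 1 and S_MO(x) -> 0 as x -> infinity, for any delta > 0,
# while the extra tilt parameter delta adds flexibility to the hazard shape.
```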

Keywords: Pareto distribution, Marshall-Olkin, reliability, hazard functions, moments, estimation

Procedia PDF Downloads 79
20078 Cyclic Etching Process Using Inductively Coupled Plasma for Polycrystalline Diamond on AlGaN/GaN Heterostructure

Authors: Haolun Sun, Ping Wang, Mei Wu, Meng Zhang, Bin Hou, Ling Yang, Xiaohua Ma, Yue Hao

Abstract:

Gallium nitride (GaN) is an attractive material for next-generation power devices. However, the performance of GaN-based high electron mobility transistors (HEMTs) is often limited by the self-heating effect. To address this problem, integrating devices with polycrystalline diamond (PCD) has been demonstrated to be an efficient way to alleviate self-heating in GaN-based HEMTs. Among the heat-spreading schemes, capping the epitaxial layer with PCD before the HEMT process is one of the most effective. Currently, the mainstream method of fabricating PCD-capped HEMTs is to deposit the diamond heat-spreading layer on the AlGaN surface, which is covered by a thin nucleation dielectric/passivation layer. To achieve patterned etching of the diamond heat spreader and device preparation, we selected SiN, deposited by plasma-enhanced chemical vapor deposition (PECVD), as the hard mask for diamond etching. The conventional diamond etching method first uses F-based etching to remove the SiN from the window region, followed by O₂/Ar plasma etching of the diamond. However, scanning electron microscopy (SEM) and focused ion beam (FIB) microscopy show that numerous diamond pillars remain on the etched diamond surface. We found that this is caused by the high roughness of the diamond surface and the overlap between diamond grains, which make the etching of the SiN hard mask insufficient and leave micro-masks on the diamond surface. A cyclic etching method was therefore proposed to solve the problem of the residual SiN left by the F-based etching. In the first step, F-based etching removes the SiN hard mask in the specified region; then, O₂/Ar plasma etches the diamond in the corresponding region. These two etching steps constitute one cycle.
After the first cycle, we applied further cycles to clear the pillars: F-based etching removes the residual SiN, and O₂/Ar plasma then etches the diamond. Whether another cycle is needed depends on whether SiN micro-masks remain. Using this method, we achieved self-terminated etching of the diamond and a smooth post-etch surface. These results demonstrate that the cyclic etching method can be successfully applied to the integrated fabrication of polycrystalline diamond thin films and GaN HEMTs.
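The two-step cycle described above amounts to a simple control loop. A schematic sketch only: the endpoint decision in practice relies on SEM/FIB inspection between cycles, and the function and variable names here are illustrative, not process-control code:

```python
def cyclic_etch(residual_sin_cycles, max_cycles=10):
    """Schematic control loop for the cyclic etch. Each cycle performs:
      step 1: F-based plasma etch (removes SiN hard mask / micro-masks),
      step 2: O2/Ar plasma etch (removes the exposed diamond),
    then inspection decides whether SiN micro-masks remain.
    `residual_sin_cycles` models how many cycles of residue must be cleared."""
    for cycle in range(1, max_cycles + 1):
        # step 1: F-based plasma etch removes the SiN in the window region
        # step 2: O2/Ar plasma etch removes the exposed diamond
        residual_sin_cycles -= 1  # inspection: one fewer cycle of residue left
        if residual_sin_cycles <= 0:
            return cycle  # self-terminating: no SiN micro-masks remain
    return max_cycles
```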

Keywords: AlGaN/GaN heterojunction, O₂/Ar plasma, cyclic etching, polycrystalline diamond

Procedia PDF Downloads 127
20077 Social Media Retailing in the Creator Economy

Authors: Julianne Cai, Weili Xue, Yibin Wu

Abstract:

Social media retailing (SMR) platforms have become popular nowadays. SMR is characterized by a creative combination of content creation and product selling, which differs from traditional e-tailing (TE), where products are sold alone. Motivated by real-world practices such as the social media platforms TikTok and douyin.com, we study whether the SMR model performs better than the TE model in a monopoly setting. By building a stylized economic model, we find that the SMR model does not always outperform the TE model. Specifically, when the SMR platform collects less commission from the seller than the TE platform, the seller, consumers, and social welfare all benefit more from the SMR model. In contrast, the platform benefits more from the SMR model if and only if the creator's social influence is high enough or the cost of content creation is small enough. Regarding the incentive structure of the content rewards in the SMR model, we find that a strong incentive mechanism (e.g., a quadratic form) is more powerful than a weak one (e.g., a linear form): the former encourages the creator to choose a much higher quality of content creation while allowing the platform, consumers, and social welfare to become better off. Counterintuitively, providing more generous content rewards is not always helpful to the creator (seller) and may reduce her profit. Our findings will guide platforms in designing incentive mechanisms that boost content creation and retailing in the SMR model, and help influencers efficiently create content, engage their followers (fans), and price the products sold on the SMR platform.

Keywords: content creation, creator economy, incentive strategy, platform retailing

Procedia PDF Downloads 105
20076 Filler for Higher Bitumen Adhesion

Authors: Alireza Rezagholilou

Abstract:

The moisture susceptibility of bituminous mixes directly affects the stripping of asphalt layers. The majority of relevant test methods are mechanical methods with low repeatability and consistency of results. This research therefore aims to evaluate the physicochemical interactions of bitumen and aggregates based on the wettability concept. To this end, the surface energies of the components at the interface are measured by the contact angle method. This makes it possible to investigate the adhesion properties of multiple mineral fillers at various percentages and to identify the best dosage in the mix. Three types of filler (hydrated lime, ground lime, and rock powder) are incorporated into the bitumen mix for a series of sessile drop tests on both aggregates and binders. Results show the variation of adhesion properties versus filler content (%).
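Sessile-drop contact angles feed directly into the Young-Dupré relation, which links the measured angle to the work of adhesion at the bitumen-aggregate interface. A minimal sketch of that basic relation only; the specific surface-energy decomposition used in the paper is not stated:

```python
import math

def work_of_adhesion(gamma_liquid, contact_angle_deg):
    """Young-Dupre equation: Wa = gamma_L * (1 + cos(theta)),
    with gamma_liquid in mJ/m^2 and the contact angle in degrees."""
    theta = math.radians(contact_angle_deg)
    return gamma_liquid * (1.0 + math.cos(theta))

# A lower contact angle (better wetting of the aggregate by the binder)
# corresponds to a higher work of adhesion, i.e. better stripping resistance.
```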

Keywords: adhesion, contact angle, filler, surface energy, moisture susceptibility

Procedia PDF Downloads 71
20075 Moving beyond the Social Model of Disability by Engaging in Anti-Oppressive Social Work Practice

Authors: Irene Carter, Roy Hanes, Judy MacDonald

Abstract:

Disability is universal and people with disabilities are part of all societies; there is a connection between the disabled individual and society; and it is society and social arrangements that disable people with impairments. Accordingly, contemporary disability discourse emphasizes the social model of disability to counter the medical and rehabilitative models. However, the social model does not go far enough in addressing the issues of oppression and inclusion. The authors argue that the social model does not specifically or adequately address the oppression of persons with disabilities, which is a central component of progressive social work practice with people with disabilities. Nor does it go far enough in deconstructing disability and offering social workers, as well as people with disabilities, a way of moving forward in terms of practice anchored in individual, familial, and societal change. The social model of disability is therefore expanded by incorporating principles of anti-oppressive practice (AOP). Although the contextual analysis of the social model of disability is an important component, there remains a need for social workers to provide services to individuals and their families, which will be illustrated through AOP. By applying an anti-oppressive model of practice, the authors not only deconstruct disability paradigms but also illustrate how AOP offers a framework for social workers to engage with people with disabilities at the individual, familial, and community levels of practice, promoting an emancipatory focus in working with people with disabilities. An anti-oppression social work model of disability connects the day-to-day hardships of people with disabilities to the direct consequences of oppression in the form of ableism.
AOP theory finds many of its basic concepts within social-oppression theory and the social model of disability. Practitioners, including social workers and psychologists, often define people with disabilities as having or being a problem, with the focus placed upon adjustment and coping. A case example will be used to illustrate how an AOP paradigm offers a more comprehensive and critical analysis and practice model for social work with and for people with disabilities than the traditional medical, rehabilitative, and social model approaches.

Keywords: anti-oppressive practice, disability, people with disabilities, social model of disability

Procedia PDF Downloads 1071
20074 Evolving Software Assessment and Certification Models Using Ant Colony Optimization Algorithm

Authors: Saad M. Darwish

Abstract:

Recently, software quality issues have come to be seen as an important subject, given the enormous growth in the number of agencies involved in the software industry. However, these agencies cannot guarantee the quality of their products, leaving users in uncertainty. Software certification extends quality assurance in the sense that quality must be measured prior to the granting of certification. This research contributes to solving the problem of software assessment by proposing a model for the assessment and certification of software products that uses a fuzzy inference engine to integrate both process-driven and application-driven quality assurance strategies. The key idea of the proposed model is to improve the compactness and interpretability of the model's fuzzy rules by employing an ant colony optimization (ACO) algorithm, which searches for good rule descriptions by building compound rules from rules initially expressed as traditional single rules. The model has been tested on a case study, and the results demonstrate its feasibility and practicability in a real environment.
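An ACO search over rule combinations can be sketched as follows. This is a generic ant-colony skeleton, not the paper's formulation: the additive scoring function is a stand-in for the compactness/interpretability objective, and a real version would penalize overlapping or contradictory rules:

```python
import random

def aco_select_rules(scores, n_ants=20, n_iter=30, rho=0.1, seed=0):
    """Generic ant-colony sketch: each ant picks a subset of candidate rules
    with probability driven by pheromone; the best subset found so far
    reinforces its pheromone trail, and unused trails evaporate.
    `scores` maps rule index -> contribution to a stand-in quality objective."""
    rng = random.Random(seed)
    n = len(scores)
    pheromone = [1.0] * n
    best_subset, best_score = set(), float("-inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            # construction step: include rule i with prob pheromone/(1+pheromone)
            subset = {i for i in range(n)
                      if rng.random() < pheromone[i] / (1.0 + pheromone[i])}
            score = sum(scores[i] for i in subset)  # stand-in objective
            if score > best_score:
                best_score, best_subset = score, subset
        # evaporation plus reinforcement of the best-so-far subset
        pheromone = [(1 - rho) * p + (rho if i in best_subset else 0.0)
                     for i, p in enumerate(pheromone)]
    return best_subset, best_score
```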

Keywords: software quality, quality assurance, software certification model, software assessment

Procedia PDF Downloads 517
20073 Local Image Features Emerging from Brain Inspired Multi-Layer Neural Network

Authors: Hui Wei, Zheng Dong

Abstract:

Object recognition has long been a challenging task in computer vision. Yet the human brain, with the ability to rapidly and accurately recognize visual stimuli, manages this task effortlessly. In the past decades, advances in neuroscience have revealed some neural mechanisms underlying visual processing. In this paper, we present a novel model inspired by the visual pathway in primate brains. This multi-layer neural network model imitates the hierarchical convergent processing mechanism in the visual pathway. We show that local image features generated by this model exhibit robust discrimination and even better generalization ability compared with some existing image descriptors. We also demonstrate the application of this model in an object recognition task on image data sets. The result provides strong support for the potential of this model.

Keywords: biological model, feature extraction, multi-layer neural network, object recognition

Procedia PDF Downloads 539
20072 Opto-Electronic Properties of Novel Structures: Sila-Fulleranes

Authors: Farah Marsusi, Mohammad Qasemnazhand

Abstract:

Density functional theory (DFT) was applied to investigate the geometry and electronic properties of H-terminated Si-fullerenes (Si-fulleranes). Natural bond orbital (NBO) analysis confirms the sp³ hybridization of Si-Si bonds in Si-fulleranes. The quantum confinement effect (QCE) does not strongly affect the band gap (BG) for cage sizes between 1 and 1.7 nm. In contrast, the geometry and symmetry of the cage have a significant influence on the BG. Unlike their carbon analogues, pentagon rings increase the stability of the cages. Functionalized Si-cages are stable and can be chemically very active. The electronic properties are highly sensitive to the surface chemistry via functionalization with different chemical groups. As a result, the BGs and chemical activities of these cages can be drastically tuned through surface chemistry.

Keywords: density functional theory, sila-fulleranes, NBO analysis, opto-electronic properties

Procedia PDF Downloads 294
20071 Photocatalytic Active Surface of LWSCC Architectural Concretes

Authors: P. Novosad, L. Osuska, M. Tazky, T. Tazky

Abstract:

Current trends in the building industry are oriented towards reducing maintenance costs and improving the ecological benefits of buildings and building materials. Surface treatment of building materials with photocatalytically active titanium dioxide added into concrete can offer a good solution in this context. Architectural concrete has one disadvantage: dust and fouling keep settling on its surface, diminishing its aesthetic value and increasing maintenance costs. The concrete surface, a silicate material with open porosity, fulfils the conditions for effective photocatalysis, in particular the self-cleaning properties of surfaces. This modern material is advantageous in particular for direct finishing and architectural concrete applications. If photoactive titanium dioxide is part of the top layers of road concrete on busy roads and of the facades of the surrounding buildings, exhaust fumes can be degraded with the aid of sunshine; hence, the environmental load will decrease. It is clear that options for removing pollutants such as nitrogen oxides (NOx) must be found. Not only do these gases present a health risk, they also cause degradation of the surfaces of concrete structures. The photocatalytic properties of titanium dioxide can in the long term contribute to the enhanced appearance of surface layers, eliminate harmful pollutants dispersed in the air, and facilitate the conversion of pollutants into less toxic forms (e.g., NOx to HNO₃). This paper describes verification of the photocatalytic properties of titanium dioxide and presents the results of mechanical and physical tests on samples of architectural lightweight self-compacting concretes (LWSCC). The very essence of using LWSCC is its rheological ability to seep into otherwise extremely hard-to-access or inaccessible construction areas, or sections where compacting the concrete would be a problem or vibration is completely excluded.
LWSCCs are also able to create solid monolithic elements with a large variety of shapes, while the concrete at the same time meets the requirements imposed by chemical aggression and the influences of the surrounding environment. Due to their viscosity, LWSCCs are able to imprint the formwork elements into their structure and thus create high-quality lightweight architectural concretes.

Keywords: photocatalytic concretes, titanium dioxide, architectural concretes, Lightweight Self-Compacting Concretes (LWSCC)

Procedia PDF Downloads 292
20070 Carbon Nanocomposites: Structure, Characterization and Environmental Application

Authors: Bensacia Nabila, Hadj-Ziane Amel, Sefah Karima

Abstract:

Carbon nanocomposites have received increasing attention in recent years in view of their special properties, such as low density, high specific surface area, and thermal and mechanical stability. Given the importance of these materials, many studies aimed at improving the synthesis process have been conducted. However, the presence of impurities can significantly affect the properties of these materials, and the characterization of these compounds is an important challenge in assuring the quality of new carbon nanocomposites. The present study aims to develop a new recyclable decontaminating material for dye removal. This new material consists of an active element based on carbon nanotubes wrapped in a microcapsule of iron oxide. The adsorbent was characterized by transmission electron microscopy and X-ray diffraction, and the surface area was measured by the BET method.
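The BET specific surface area follows from a linear fit of the BET equation, 1/(v[(p₀/p) − 1]) = (c − 1)/(v_m c) · (p/p₀) + 1/(v_m c). A minimal sketch using the standard N₂ cross-sectional area; the paper's measurement conditions are not specified:

```python
def bet_surface_area(slope, intercept):
    """Given the slope and intercept of the linear BET plot
    1/(v[(p0/p)-1]) versus p/p0, with v in cm^3 STP per gram of adsorbent,
    return (v_m, c, specific surface area in m^2/g)."""
    v_m = 1.0 / (slope + intercept)        # monolayer capacity, cm^3 STP/g
    c = 1.0 + slope / intercept            # BET constant
    N_A = 6.022e23                         # Avogadro's number, 1/mol
    sigma_n2 = 0.162e-18                   # N2 molecular cross-section, m^2
    area = v_m * N_A * sigma_n2 / 22414.0  # 22414 cm^3 STP per mole of gas
    return v_m, c, area
```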

Keywords: carbon nanocomposite, chitosan, elimination, dyes

Procedia PDF Downloads 319
20069 Simulation of Optimal Runoff Hydrograph Using Ensemble of Radar Rainfall and Blending of Runoffs Model

Authors: Myungjin Lee, Daegun Han, Jongsung Kim, Soojun Kim, Hung Soo Kim

Abstract:

Recently, localized heavy rainfall and typhoons have occurred frequently due to climate change, and the resulting damage is growing; more accurate prediction of rainfall and runoff is therefore needed. Gauge rainfall, however, has limited accuracy in space. Radar rainfall explains the spatial variability of rainfall better than gauge rainfall, but it is mostly underestimated and involves uncertainty. Therefore, an ensemble of radar rainfall was simulated using an error structure derived from gauge rainfall to overcome this uncertainty. The simulated ensemble was used as input data for rainfall-runoff models to obtain an ensemble of runoff hydrographs. Previous studies have discussed the accuracy of rainfall-runoff models: even if the same input data, such as rainfall, are used for runoff analysis in the same basin, different models can produce different results because of the uncertainty involved in the models. Therefore, we used two models, the SSARR model, which is a lumped model, and the Vflo model, which is a distributed model, and tried to simulate the optimum runoff considering the uncertainty of each rainfall-runoff model. The study basin is located in the Han River basin, and we obtained one integrated, optimum runoff hydrograph using blending methods such as the Multi-Model Super Ensemble (MMSE), Simple Model Average (SMA), and Mean Square Error (MSE) weighting. From this study, we could confirm the accuracy of the rainfall and the rainfall-runoff models using ensemble scenarios and various rainfall-runoff models, and this result can be used to study flood control measures under climate change. Acknowledgements: This work is supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (Grant 18AWMP-B083066-05).
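The blending step combines the two model runoffs into a single hydrograph. A minimal sketch of the simple-average and inverse-MSE weighting schemes; the MMSE regression variant requires a training period and is omitted, and the numbers in the usage are illustrative:

```python
def mse(pred, obs):
    """Mean square error between a predicted and an observed hydrograph."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

def blend_hydrographs(q_lumped, q_distributed, obs):
    """Blend two runoff hydrographs (e.g. SSARR and Vflo output):
    returns the simple model average (SMA) and an inverse-MSE-weighted blend,
    where the model with the smaller error gets the larger weight."""
    sma = [(a + b) / 2.0 for a, b in zip(q_lumped, q_distributed)]
    m1, m2 = mse(q_lumped, obs), mse(q_distributed, obs)
    total = 1.0 / m1 + 1.0 / m2
    w1, w2 = (1.0 / m1) / total, (1.0 / m2) / total
    weighted = [w1 * a + w2 * b for a, b in zip(q_lumped, q_distributed)]
    return sma, weighted
```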

Keywords: radar rainfall ensemble, rainfall-runoff models, blending method, optimum runoff hydrograph

Procedia PDF Downloads 272
20068 Modeling of Coagulation Process for the Removal of Carbofuran in Aqueous Solution

Authors: Roli Saini, Pradeep Kumar

Abstract:

A coagulation/flocculation process was adopted for the reduction of the carbamate insecticide carbofuran in aqueous solution. Ferric chloride (FeCl₃) was used as the coagulant. To evaluate the reduction efficiency of pesticide concentration and COD, jar-test experiments were carried out, and the process was optimized through response surface methodology (RSM). The effects of two independent factors, FeCl₃ dosage and pH, on the reduction efficiency were estimated using a central composite design (CCD). The initial COD of the 30 mg/L carbofuran solution was found to be 510 mg/L. Results showed that the maximum reduction occurred at the optimal condition of FeCl₃ = 80 mg/L and pH = 5.0, at which the reductions in concentration and COD were 75.13% and 65.34%, respectively. The study also suggests that the obtained regression equations could serve as a theoretical basis for the coagulation treatment of pesticide wastewater.
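With two factors, CCD data are typically fit to the second-order response surface y = b₀ + b₁x₁ + b₂x₂ + b₁₁x₁² + b₂₂x₂² + b₁₂x₁x₂. A minimal sketch of that fit by ordinary least squares, with no external libraries; any data passed in are illustrative, not the paper's:

```python
def fit_quadratic_rsm(x1, x2, y):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2
    + b12*x1*x2 via the normal equations (plain Gaussian elimination)."""
    X = [[1.0, a, b, a * a, b * b, a * b] for a, b in zip(x1, x2)]
    n = 6
    # normal equations: (X^T X) beta = X^T y
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    v = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    for col in range(n):  # forward elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [ar - f * ac for ar, ac in zip(A[r], A[col])]
            v[r] -= f * v[col]
    beta = [0.0] * n
    for i in range(n - 1, -1, -1):  # back substitution
        beta[i] = (v[i] - sum(A[i][j] * beta[j] for j in range(i + 1, n))) / A[i][i]
    return beta
```

The stationary point of the fitted surface (here, optimal dosage and pH) follows from setting both partial derivatives of the quadratic to zero.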

Keywords: carbofuran, coagulation, optimization, response surface methodology

Procedia PDF Downloads 320
20067 Design and Numerical Study on Aerodynamics Performance for F16 Leading Edge Extension

Authors: San-Yih Lin, Hsien-Hao Teng

Abstract:

In this research, we use the commercial software ANSYS CFX to simulate the flow field and aerodynamic performance of the F16. A modified Leading Edge Extension (LEX) is proposed to increase the lift/drag ratio. The Shear Stress Transport turbulence model is used. The unstructured grid system is generated by ICEM CFD: a prism grid is generated near the wall surface to resolve the viscous boundary layer flow, and a tetrahedral mesh is used for the rest of the computational domain. The lift, drag, and pitch moment are computed. The strong vortex structures above the wing and the vortex bursts at different LEX sweep angles are investigated.

Keywords: LEX, lift/drag ratio, pitch moment, vortex burst

Procedia PDF Downloads 320
20066 Application Difference between Cox and Logistic Regression Models

Authors: Idrissa Kayijuka

Abstract:

The logistic regression and Cox regression (proportional hazards) models are currently employed in the analysis of prospective epidemiologic research into risk factors for chronic diseases, and a theoretical relationship between the two models has been studied. By definition, the Cox regression model, also called the Cox proportional hazards model, is a procedure used to model data on the time leading up to an event where censored cases exist, whereas the logistic regression model is mostly applicable where the independent variables consist of numerical as well as nominal values and the outcome variable is binary (dichotomous). Arguments and findings of many researchers have focused on overviews of the Cox and logistic regression models and their applications in different areas. In this work, the analysis is done on secondary data, sourced from the SPSS exercise data on breast cancer, with a sample size of 1121 women; the main objective is to show the application difference between the Cox regression model and the logistic regression model based on factors that cause women to die of breast cancer. Some analysis (e.g., on lymph node status) was done manually, and SPSS software was used to analyze the rest of the data. This study found that there is an application difference between the Cox and logistic regression models: the Cox regression model is used when one wishes to analyze data that include follow-up time, whereas the logistic regression model analyzes data without follow-up time. They also have different measures of association: the hazard ratio and the odds ratio for the Cox and logistic regression models, respectively. A similarity between the two models is that both are applicable to predicting the outcome of a categorical variable, i.e., a variable that can accommodate only a restricted number of categories.
In conclusion, the Cox regression model differs from logistic regression by assessing a rate instead of a proportion. Both models are suitable methods for analyzing data and can be applied in many other studies, but the Cox regression model is the more recommended of the two.
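The "rate instead of proportion" distinction can be made concrete on a toy exposed/unexposed summary: the logistic view's odds ratio uses subject counts alone, while the Cox/survival view's rate ratio divides events by accumulated follow-up time. The numbers below are illustrative only, not from the breast cancer data:

```python
def odds_ratio(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Association measure matching logistic regression: ratio of the odds
    of the event, computed from counts with no notion of follow-up time."""
    odds_e = events_exposed / (n_exposed - events_exposed)
    odds_u = events_unexposed / (n_unexposed - events_unexposed)
    return odds_e / odds_u

def rate_ratio(events_exposed, persontime_exposed,
               events_unexposed, persontime_unexposed):
    """Association measure in the Cox/survival view: ratio of event *rates*,
    i.e. events divided by person-time of follow-up, not by subject counts."""
    rate_e = events_exposed / persontime_exposed
    rate_u = events_unexposed / persontime_unexposed
    return rate_e / rate_u
```

Two groups with identical counts but different follow-up times give the same odds ratio yet different rate ratios, which is exactly why follow-up time pushes the analysis toward the Cox model.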

Keywords: logistic regression model, Cox regression model, survival analysis, hazard ratio

Procedia PDF Downloads 447
20065 Comparison of Wake Oscillator Models to Predict Vortex-Induced Vibration of Tall Chimneys

Authors: Saba Rahman, Arvind K. Jain, S. D. Bharti, T. K. Datta

Abstract:

The present study compares semi-empirical wake-oscillator models used to predict the vortex-induced vibration of structures, including those proposed by Facchinetti, by Farshidian and Dolatabadi, and by Skop and Griffin. These models combine a wake oscillator resembling the Van der Pol oscillator with a single-degree-of-freedom structural oscillator. To use these models for estimating the top displacement of chimneys, only the first vibration mode of the chimney is considered; the modal equation of the chimney constitutes the single-degree-of-freedom (SDOF) model. The equations of the wake oscillator and the SDOF model are solved simultaneously using an iterative procedure, with the ODE solver of MATLAB. The empirical parameters of the wake-oscillator models are estimated using a newly developed approach, and the response is compared with experimental data, with which it appears comparable. For the comparative study, a tall concrete chimney of height 210 m has been chosen, with a base diameter of 28 m, a top diameter of 20 m, and a wall thickness of 0.3 m. The responses of the chimney are also determined using the linear model proposed by E. Simiu and the deterministic model given in the Eurocode. The comparative study shows that the responses predicted by the Facchinetti model and by the Skop and Griffin model are nearly the same, while the Farshidian and Dolatabadi model predicts a higher response. The linear model, which does not consider the aero-elastic phenomenon, gives a lower response than the non-linear models. Further, for large damping, the Eurocode prediction compares relatively well with those of the non-linear models.
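The coupled structure/wake system can be integrated with any fixed-step ODE scheme. A minimal RK4 sketch of a Facchinetti-type coupling in nondimensional form; the parameter values and the exact coupling terms are illustrative assumptions, not the paper's calibrated model:

```python
import math

def facchinetti_step(state, dt, zeta, omega_s, eps, A, M):
    """One RK4 step of a Facchinetti-type coupled system (illustrative form):
    structural SDOF oscillator:  x'' + 2*zeta*x' + x = M*q
    Van der Pol wake oscillator: q'' + eps*omega_s*(q^2 - 1)*q' + omega_s^2*q = A*x''
    state = (x, x', q, q')."""
    def deriv(s):
        x, xd, q, qd = s
        xdd = -2.0 * zeta * xd - x + M * q
        qdd = -eps * omega_s * (q * q - 1.0) * qd - omega_s ** 2 * q + A * xdd
        return (xd, xdd, qd, qdd)
    k1 = deriv(state)
    k2 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def peak_displacement(n_steps=20000, dt=0.01, zeta=0.01, omega_s=1.0,
                      eps=0.3, A=12.0, M=0.05):
    """Integrate from a small wake perturbation and track the peak |x|;
    at lock-in (omega_s near the structural frequency) the wake self-excites
    and drives the structure to a bounded limit-cycle amplitude."""
    state = (0.0, 0.0, 0.1, 0.0)
    peak = 0.0
    for _ in range(n_steps):
        state = facchinetti_step(state, dt, zeta, omega_s, eps, A, M)
        peak = max(peak, abs(state[0]))
    return peak
```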

Keywords: chimney, deterministic model, Van der Pol, vortex-induced vibration

Procedia PDF Downloads 216
20064 Sensitivity Enhancement in Graphene Based Surface Plasmon Resonance (SPR) Biosensor

Authors: Angad S. Kushwaha, Rajeev Kumar, Monika Srivastava, S. K. Srivastava

Abstract:

A great deal of research is under way on graphene-based SPR biosensors. In the conventional SPR-based biosensor, graphene is used as the biomolecular recognition element; graphene adsorbs biomolecules due to its carbon-based ring structure with sp² hybridization. The proposed SPR biosensor configuration opens a new avenue for efficient biosensing by taking advantage of graphene and its fascinating nanofabrication properties. In the present study, we have studied an SPR biosensor based on graphene mediated by zinc oxide (ZnO) and gold. In the proposed structure, the prism (BK7) base is coated with zinc oxide, followed by gold and graphene. The proposed structure has been investigated theoretically using the waveguide approach with the transfer matrix method. We analyzed the reflectance versus incidence angle curve for a He-Ne laser of wavelength 632.8 nm. The angle at which the reflectance is minimized is termed the SPR angle, and the shift in the SPR angle is responsible for biosensing. From the analysis of the reflectivity curve, we found that the SPR angle shifts as biomolecules attach to the graphene surface. The graphene layer also enhances the sensitivity of the SPR sensor compared to the conventional sensor, and the sensitivity increases further with the number of graphene layers. In our proposed biosensor, we thus obtained the minimum possible reflectivity together with an optimum level of sensitivity.
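The transfer matrix method for TM (p-polarized) reflectance multiplies one 2x2 characteristic matrix per thin film. A minimal sketch for a bare Kretschmann stack (BK7 prism / gold / water); the gold permittivity is an assumed tabulated value at 633 nm, and the paper's ZnO and graphene layers would simply be appended to the layer list:

```python
import cmath, math

def tm_reflectance(theta_deg, wavelength_nm, n_prism, layers, n_medium):
    """TM reflectance of prism / thin-film stack / semi-infinite medium.
    `layers` is a list of (complex permittivity, thickness in nm)."""
    k0 = 2 * math.pi / wavelength_nm
    kx2 = (n_prism * math.sin(math.radians(theta_deg))) ** 2  # in units of k0^2
    def q_of(eps):
        return cmath.sqrt(eps - kx2) / eps  # TM admittance of a half-space
    M = [[1.0, 0.0], [0.0, 1.0]]
    for eps, d in layers:
        kz = cmath.sqrt(eps - kx2)
        beta = k0 * d * kz                 # phase thickness of the layer
        q = kz / eps
        Mk = [[cmath.cos(beta), -1j * cmath.sin(beta) / q],
              [-1j * q * cmath.sin(beta), cmath.cos(beta)]]
        M = [[M[0][0] * Mk[0][0] + M[0][1] * Mk[1][0],
              M[0][0] * Mk[0][1] + M[0][1] * Mk[1][1]],
             [M[1][0] * Mk[0][0] + M[1][1] * Mk[1][0],
              M[1][0] * Mk[0][1] + M[1][1] * Mk[1][1]]]
    q1, qN = q_of(n_prism ** 2), q_of(n_medium ** 2)
    num = (M[0][0] + M[0][1] * qN) * q1 - (M[1][0] + M[1][1] * qN)
    den = (M[0][0] + M[0][1] * qN) * q1 + (M[1][0] + M[1][1] * qN)
    return abs(num / den) ** 2

gold_eps = complex(-11.75, 1.26)  # assumed value for Au at 632.8 nm
curve = [(t, tm_reflectance(t, 632.8, 1.515, [(gold_eps, 50.0)], 1.33))
         for t in range(60, 86)]
spr_angle = min(curve, key=lambda p: p[1])[0]  # angle of the reflectance dip
```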

Keywords: biosensor, sensitivity, surface plasmon resonance, transfer matrix method

Procedia PDF Downloads 414
20063 Earthquake Relocations and Constraints on the Lateral Velocity Variations along the Gulf of Suez, Using the Modified Joint Hypocenter Method Determination

Authors: Abu Bakr Ahmed Shater

Abstract:

Hypocenters of 250 earthquakes recorded by more than 5 stations of the Egyptian seismic network around the Gulf of Suez were relocated, and the P-wave station corrections were estimated using the modified joint hypocenter determination method. Five stations (TR1, SHR, GRB, ZAF and ZET) have negative P-wave travel time corrections, with values of -0.235, -0.366, -0.288, -0.366 and -0.058, respectively; it is possible to assume that the underground model in their area is characterized by a high-velocity structure. The other stations (TR2, RDS, SUZ, HRG and ZNM) have positive corrections, with values of 0.024, 0.187, 0.314, 0.645 and 0.145, respectively; it is possible to assume that the underground model in their area is characterized by a low-velocity structure. The hypocentral locations determined by the modified joint hypocenter determination method are more precise than those determined by the routine location program, because the method simultaneously solves for the earthquake locations and the station corrections. The station corrections reflect not only the different crustal conditions in the vicinity of the stations but also the difference between the actual and modeled seismic velocities along each earthquake-station ray path. The station corrections obtained correlate with the major surface geological features in the study area. As a result of the relocation, low-velocity areas appear on the northeastern and southwestern sides of the Gulf of Suez, while the southeastern and northwestern parts are high-velocity areas.

Keywords: gulf of Suez, seismicity, relocation of hypocenter, joint hypocenter determination

Procedia PDF Downloads 354
20062 Investigation of Acidizing Corrosion Inhibitors for Mild Steel in Hydrochloric Acid: Theoretical and Experimental Approaches

Authors: Ambrish Singh

Abstract:

The corrosion inhibition performance of pyran derivatives (APs) on mild steel in 15% HCl was investigated by electrochemical impedance spectroscopy (EIS), potentiodynamic polarization, weight loss, contact angle, and scanning electron microscopy (SEM) measurements, together with DFT and molecular dynamics simulations. The adsorption of APs on the surface of mild steel obeyed the Langmuir isotherm. The potentiodynamic polarization study confirmed that the inhibitors are of mixed type with cathodic predominance. Molecular dynamics simulation was applied to search for the most stable configuration and the adsorption energies for the interaction of the inhibitors with the Fe (110) surface. The theoretical results are, in most cases, in agreement with the experimental ones.
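The Langmuir fit mentioned above is usually done by plotting C/θ against concentration C, taking the surface coverage θ from the inhibition efficiency. A minimal sketch; any concentrations and efficiencies passed in are illustrative, not the paper's data:

```python
def langmuir_fit(concentrations, efficiencies_pct):
    """Fit the Langmuir isotherm C/theta = 1/K_ads + C by least-squares
    line through the points (C, C/theta), where theta = efficiency / 100.
    Returns (slope, K_ads); a slope near 1 supports Langmuir-type
    monolayer adsorption of the inhibitor."""
    xs = list(concentrations)
    ys = [c / (e / 100.0) for c, e in zip(concentrations, efficiencies_pct)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, 1.0 / intercept  # intercept of the line equals 1/K_ads
```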

Keywords: acidizing inhibitor, pyran derivatives, DFT, molecular simulation, mild steel, EIS

Procedia PDF Downloads 191
20061 Effect of TERGITOL NP-9 and PEG-10 Oleyl Phosphate as Surfactant and Corrosion Inhibitor on Tribo-Corrosion Performance of Carbon Steel in Emulsion-Based Drilling Fluids

Authors: Mohammadjavad Palimi, D. Y. Li, E. Kuru

Abstract:

Emulsion-based drilling fluids containing mineral oil are commonly used in drilling operations; they generate a lubricating film that prevents direct contact between moving metal parts, thus reducing friction, wear, and corrosion. For long-lasting lubrication, the thin lubricating film formed on the metal surface should possess good anti-wear and anti-corrosion capabilities. This study investigates the effects of two additives, TERGITOL NP-9 and PEG-10 oleyl phosphate, acting as surfactant and corrosion inhibitor, respectively, on the tribo-corrosion behavior of 1018 carbon steel immersed in 5% KCl solution at room temperature. A pin-on-disc tribometer attached to an electrochemical system was used to investigate the corrosive wear of the steel immersed in emulsion-based fluids containing the surfactant and corrosion inhibitor. The wear track and the surface chemistry and composition of the protective film formed on the steel surface were analyzed with an optical profilometer, SEM, and SEM-EDX. The results demonstrate that the corrosion inhibitor significantly improved the performance of the emulsion-based drilling fluids, remarkably reducing corrosion, the coefficient of friction (COF), and wear.

Keywords: corrosion inhibitor, emulsion-based drilling fluid, tribo-corrosion, friction, wear

Procedia PDF Downloads 66
20060 Biocompatibility and Electrochemical Assessment of Biomedical Ti-24Nb-4Zr-8Sn Produced by Spark Plasma Sintering

Authors: Jerman Madonsela, Wallace Matizamhuka, Akiko Yamamoto, Ronald Machaka, Brendon Shongwe

Abstract:

In this study, the biocompatibility of a nanostructured near-beta Ti-24Nb-4Zr-8Sn (Ti2448) alloy containing only non-toxic elements was evaluated. The alloy was produced by spark plasma sintering (SPS) of very fine micro-sized powders obtained through mechanical alloying, and the results were compared with pure titanium and the Ti-6Al-4V (Ti64) alloy. A cell proliferation test was performed using murine osteoblastic cells (MC3T3-E1) at two cell densities, 400 and 4000 cells/mL, for 7 days of incubation. Pure titanium took the lead under both conditions, suggesting that the presence of other oxide layers influences cell proliferation. No significant difference in cell proliferation was observed between Ti64 and Ti2448. Potentiodynamic measurements in Hanks' solution, 0.9% NaCl, and cell culture medium showed no distinct difference in the anodic polarization curves of the three alloys, indicating that the same anodic reaction occurred on their surfaces, but at different rates. However, Ti2448 showed better corrosion resistance in cell culture medium, with a slightly lower corrosion current density of 2.96 nA/cm² compared to 4.86 nA/cm² and 5.62 nA/cm² for Ti and Ti64, respectively. Ti2448 adsorbed less protein than Ti and Ti64, though no notable difference in surface wettability was observed.

Keywords: biocompatibility, osteoblast, corrosion, surface wettability, protein adsorption

Procedia PDF Downloads 217
20059 On Differential Growth Equation to Stochastic Growth Model Using Hyperbolic Sine Function in Height/Diameter Modeling of Pines

Authors: S. O. Oyamakin, A. U. Chukwu

Abstract:

Richard's growth equation, a generalized logistic growth equation, was improved upon by introducing an allometric parameter through the hyperbolic sine function. The integral solution was called the hyperbolic Richard's growth model, transforming the solution from a deterministic to a stochastic growth model. Its predictive ability was compared with that of the classical Richard's growth model, an approach which mimics the natural variability of height/diameter increment with respect to age and therefore provides more realistic height/diameter predictions, using the coefficient of determination (R²), the Mean Absolute Error (MAE), and the Mean Square Error (MSE). The Kolmogorov-Smirnov and Shapiro-Wilk tests were also used to check the behavior of the error term for possible violations. The mean function of top height/Dbh over age predicted the observed values of top height/Dbh more closely under the hyperbolic Richard's nonlinear growth model than under the classical Richard's growth model.
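The comparison criteria used in the abstract (R², MAE, MSE) can be sketched against the classical Richard's curve; the hyperbolic variant replaces the growth exponent with the paper's sinh-based form, which is not reproduced here. The age/height data below are synthetic, for illustration only.

```python
import numpy as np

def richards(t, a, b, k, m):
    """Classical Richard's (generalized logistic) growth curve."""
    return a * (1.0 + b * np.exp(-k * t)) ** (-1.0 / m)

def fit_metrics(y_obs, y_pred):
    """R^2, MAE and MSE, the criteria used to compare candidate models."""
    ss_res = np.sum((y_obs - y_pred) ** 2)
    ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    mae = np.mean(np.abs(y_obs - y_pred))
    mse = np.mean((y_obs - y_pred) ** 2)
    return r2, mae, mse

# Synthetic age (years) and top-height (m) observations: a Richard's
# curve plus noise, standing in for real Pinus caribaea data.
age = np.linspace(1, 25, 25)
rng = np.random.default_rng(0)
obs = richards(age, 30.0, 5.0, 0.25, 1.2) + rng.normal(0, 0.3, 25)

r2, mae, mse = fit_metrics(obs, richards(age, 30.0, 5.0, 0.25, 1.2))
```

The same three metrics, computed for each fitted model on the same data, give the head-to-head comparison described in the abstract.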

Keywords: height, Dbh, forest, Pinus caribaea, hyperbolic, Richard's, stochastic

Procedia PDF Downloads 474
20058 Development of a Predictive Model to Prevent Financial Crisis

Authors: Tengqin Han

Abstract:

Delinquency has been a crucial factor in economics throughout the years. Commonly seen in credit cards and mortgages, it played one of the crucial roles in causing the most recent financial crisis, in 2008. In each case, a delinquency is a sign that the borrower is unable to pay off the debt, and it may thus cause a loss of property in the end. Individually, one case of delinquency seems unimportant compared to the entire credit system. In China, an emerging economic entity, national and economic strength have grown rapidly, and the gross domestic product (GDP) growth rate has remained as high as 8% over the past decades. However, potential risks exist behind this appearance of prosperity, and among them the credit system is the most significant. Because of the long term and large balance of a mortgage, it is critical to monitor the risk during the performance period. In this project, about 300,000 mortgage account records are analyzed in order to develop a model that predicts the probability of delinquency. Through univariate analysis the data are cleaned, and through bivariate analysis the variables with strong predictive power are detected. The project is divided into two parts. In the first part, the 2005 analysis data are split into two sets: 60% for model development and 40% for in-time model validation. The KS statistic is 31 for model development and 31 for in-time validation, indicating that the model is stable. The model is further checked by out-of-time validation, which uses 40% of the 2006 data and gives a KS of 33, indicating that the model remains stable and robust. In the second part, the model is improved by adding macroeconomic indexes, including GDP, the consumer price index, the unemployment rate, and the inflation rate; data from 2005 to 2010 are used for model development and validation.
Compared with the base model (without macroeconomic variables), KS increases from 41 to 44, indicating that the macroeconomic variables improve the separation power of the model and make the prediction more accurate.
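The KS separation statistic used throughout the abstract can be sketched as the maximum gap between the cumulative score distributions of delinquent and non-delinquent accounts. The scores and labels below are simulated, for illustration only.

```python
import numpy as np

def ks_statistic(scores, is_delinquent):
    """Two-sample KS: the maximum gap between the cumulative score
    distributions of delinquent and non-delinquent accounts, reported
    on the 0-100 scale used for scorecards."""
    order = np.argsort(scores)
    bad = np.asarray(is_delinquent, dtype=float)[order]
    good = 1.0 - bad
    cum_bad = np.cumsum(bad) / bad.sum()
    cum_good = np.cumsum(good) / good.sum()
    return 100.0 * np.max(np.abs(cum_bad - cum_good))

# Simulated portfolio: delinquent accounts skew toward lower scores.
rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(600, 50, 900), rng.normal(550, 50, 100)])
labels = np.concatenate([np.zeros(900), np.ones(100)])
ks = ks_statistic(scores, labels)
```

A higher KS means the model score separates future delinquents from non-delinquents more sharply, which is exactly the improvement the macroeconomic variables provide in the abstract.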

Keywords: delinquency, mortgage, model development, model validation

Procedia PDF Downloads 220
20057 Mapping of Urban Micro-Climate in Lyon (France) by Integrating Complementary Predictors at Different Scales into Multiple Linear Regression Models

Authors: Lucille Alonso, Florent Renard

Abstract:

The characterization of urban heat islands (UHI) and their interactions with climate change and urban climates is a major research and public health issue, due to the increasing urbanization of the population. Addressing it requires a better knowledge of the UHI and micro-climate in urban areas, combining measurements and modelling. This study contributes to this topic by evaluating microclimatic conditions in dense urban areas of the Lyon Metropolitan Area (France) using a combination of traditionally used data such as topography, together with LiDAR (Light Detection and Ranging) data, Landsat 8 and Sentinel satellite observations, and ground measurements by bike. These bicycle-borne weather data collections are used to build the database of the variable to be modelled, the air temperature, over Lyon's hyper-center. This study aims to model the air temperature, measured during 6 mobile campaigns in Lyon in clear weather, using multiple linear regressions based on 33 explanatory variables. They fall into various categories, such as meteorological parameters from remote sensing, topographic variables, vegetation indices, the presence of water, humidity, bare soil, buildings, radiation, urban morphology, and proximity and density with respect to various land uses (water surfaces, vegetation, bare soil, etc.). The acquisition sources are multiple: the Landsat 8 and Sentinel satellites, LiDAR points, and cartographic products downloaded from an open data platform in Greater Lyon. For the presence of low, medium, and high vegetation, buildings, and ground, several buffer radii around the measurement points were tested (5, 10, 20, 25, 50, 100, 200 and 500 m). The buffers with the best linear correlations with air temperature are 5 m for ground, low vegetation, and medium vegetation, 50 m for buildings, and 100 m for high vegetation.
The explanatory model of the dependent variable is obtained by multiple linear regression of the remaining explanatory variables (after screening with a Pearson correlation matrix, |r| < 0.7, and VIF < 5), integrating a stepwise selection algorithm. Moreover, holdout cross-validation (80% training, 20% testing) is performed because of its ability to detect over-fitting of multiple regression, even though multiple regression provides internal validation and randomization. Multiple linear regression explained, on average, 72% of the variance for the study days, with an average RMSE of only 0.20°C. Surface temperature has the greatest impact on the model in the estimation of air temperature. Other variables are recurrent, such as distance to subway stations, distance to water areas, NDVI, the digital elevation model, the sky view factor, average vegetation density, and building density. Changing urban morphology influences the city's thermal patterns. The thermal atmosphere in dense urban areas can only be analysed at the microscale, so as to consider the local impact of trees, streets, and buildings. There is currently no sufficiently dense network of fixed weather stations in central Lyon or in most major urban areas. It is therefore necessary to use mobile measurements, followed by modelling, to characterize the city's multiple thermal environments.
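The holdout validation step described above can be sketched with ordinary least squares on a random 80/20 split. The predictors and temperatures below are synthetic stand-ins (e.g. for surface temperature and NDVI), not the study's data.

```python
import numpy as np

def holdout_mlr(X, y, train_frac=0.8, seed=0):
    """Fit y = Xb by least squares on a random train split and report
    R^2 and RMSE on the held-out rows (the paper's holdout validation)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    n_train = int(train_frac * len(y))
    tr, te = idx[:n_train], idx[n_train:]
    A = np.column_stack([np.ones(len(y)), X])  # add intercept column
    beta, *_ = np.linalg.lstsq(A[tr], y[tr], rcond=None)
    pred = A[te] @ beta
    rmse = np.sqrt(np.mean((y[te] - pred) ** 2))
    r2 = 1.0 - np.sum((y[te] - pred) ** 2) / np.sum((y[te] - y[te].mean()) ** 2)
    return beta, r2, rmse

# Synthetic example: air temperature driven by two standardized
# predictors plus noise, for illustration only.
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 2))
y = 20.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.2, 300)
beta, r2, rmse = holdout_mlr(X, y)
```

Scoring on rows the fit never saw is what lets this procedure expose over-fitting that in-sample R² would hide.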

Keywords: air temperature, LIDAR, multiple linear regression, surface temperature, urban heat island

Procedia PDF Downloads 133
20056 Optimization of Samarium Extraction via Nanofluid-Based Emulsion Liquid Membrane Using Cyanex 272 as Mobile Carrier

Authors: Maliheh Raji, Hossein Abolghasemi, Jaber Safdari, Ali Kargari

Abstract:

Samarium, a rare-earth element, is playing a growing role in high technology. Traditional methods for the extraction of rare-earth metals, such as ion exchange and solvent extraction, have the disadvantages of high investment and high energy consumption. The emulsion liquid membrane (ELM), an improved solvent extraction technique, is an effective transport method for the separation of various compounds from aqueous solutions. In this work, the extraction of samarium from aqueous solutions by ELM was investigated using response surface methodology (RSM). The organic membrane phase of the ELM was a nanofluid consisting of multiwalled carbon nanotubes (MWCNT), Span 80 as surfactant, Cyanex 272 as mobile carrier, and kerosene as base fluid. A 1 M nitric acid solution was used as the internal aqueous phase. The effects of the important process parameters on samarium extraction were investigated, and their values were optimized using the central composite design (CCD) of RSM. These parameters were the concentration of MWCNT in the nanofluid, the carrier concentration, and the volume ratio of organic membrane phase to internal phase (Roi). Three-dimensional (3D) response surfaces of the samarium extraction efficiency were obtained to visualize the individual and interactive effects of the process variables. A regression model for the percentage extraction was developed and its adequacy evaluated. The results show that the extraction improves when the MWCNT nanofluid is used in the organic membrane phase, and an extraction efficiency of 98.92% can be achieved under the optimum conditions. In addition, demulsification was performed successfully, and the recycled membrane phase proved to be effective under the optimum conditions.
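The central composite design mentioned above can be sketched in coded units for the three factors (MWCNT concentration, carrier concentration, Roi). The rotatable axial spacing α = (2^k)^(1/4) and six center runs are textbook defaults, not necessarily the settings used in the study.

```python
import itertools
import numpy as np

def central_composite(k, alpha=None, n_center=6):
    """Coded points of a central composite design for k factors:
    2^k factorial corners, 2k axial points at +/-alpha, and center runs."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25  # rotatability criterion for the axial distance
    corners = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i] = -alpha
        axial[2 * i + 1, i] = alpha
    center = np.zeros((n_center, k))
    return np.vstack([corners, axial, center])

# Three coded variables: MWCNT concentration, carrier concentration, Roi.
design = central_composite(3)
```

Each row is one experimental run in coded units; decoding back to physical units fixes the actual concentrations and volume ratios to test.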

Keywords: Cyanex 272, emulsion liquid membrane, MWCNT nanofluid, response surface methodology, samarium

Procedia PDF Downloads 418
20055 Potential Effects of Climate Change on Streamflow, Based on the Occurrence of Severe Floods in Kelantan, East Coasts of Peninsular Malaysia River Basin

Authors: Muhd. Barzani Gasim, Mohd. Ekhwan Toriman, Mohd. Khairul Amri Kamarudin, Azman Azid, Siti Humaira Haron, Muhammad Hafiz Md. Saad

Abstract:

Malaysia is a country in Southeast Asia that is constantly exposed to flooding and landslides. These disasters cause losses such as damage to property, loss of life, and hardship for the people involved, and they arise as climate change increases streamflow rates by disrupting regional hydrological cycles. The aim of the study is to determine the hydrologic processes of the east coast of Peninsular Malaysia, especially the Kelantan Basin, parameterized to account for the spatial and temporal variability of basin characteristics and their responses to climate variability. For hydrological modelling of the basin, the Soil and Water Assessment Tool (SWAT) model is used with inputs such as relief, soil type and use, and historical daily time series of climate and river flow rates; the interpretation of Landsat land-use maps is also applied in this study. By combining the SWAT and climate models, the system predicts, under a future climate scenario, increases in precipitation, surface runoff, recharge, and total water yield. As a result, the model has successfully supported the basin analysis, providing, through visual hydrograph analysis, good estimates of the minimum and maximum flows and of the severe floods observed during the calibration and validation periods.

Keywords: east coasts of Peninsular Malaysia, Kelantan river basin, minimum and maximum flows, severe floods, SWAT model

Procedia PDF Downloads 257
20054 Proactive WPA/WPA2 Security Using DD-WRT Firmware

Authors: Mustafa Kamoona, Mohamed El-Sharkawy

Abstract:

Although the latest Wireless Local Area Network technology, the Wi-Fi 802.11i standard, addresses many of the security weaknesses of the antecedent Wired Equivalent Privacy (WEP) protocol, there are still scenarios where network security is vulnerable. The first security model that 802.11i offers is the Personal model, which is very cheap and simple to install and maintain, yet it uses a Pre-Shared Key (PSK) and thus has a low to medium security level. The second model that 802.11i provides is the Enterprise model, which is highly secure but much more expensive and difficult to install and maintain, as it requires the installation and maintenance of an authentication server that handles the authentication and key management for the wireless network. A central issue with the Personal model is that the PSK needs to be shared with all the devices connected to the specific Wi-Fi network. This pre-shared key, unless changed regularly, can be cracked by offline dictionary attacks within a matter of hours, yet it is burdensome to change manually on all the connected devices unless some algorithm coordinates the PSK update. The key idea of this paper is to propose a new algorithm that proactively and effectively coordinates pre-shared key generation, management, and distribution in the cheap WPA/WPA2 Personal security model using only a DD-WRT router.
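The coordinated-update idea can be sketched as deriving an epoch-dependent passphrase from a shared master secret, so that every device knowing the secret computes the same rotating PSK without manual redistribution. This is a simplified illustration, not the paper's exact algorithm.

```python
import hashlib
import string

def derive_psk(master_secret: bytes, epoch: int, length: int = 20) -> str:
    """Derive an epoch-dependent WPA2 passphrase from a master secret.

    Every device holding the secret derives the identical passphrase for
    a given epoch (e.g. week number), so the PSK can rotate regularly
    without anyone typing a new key into each device.
    """
    alphabet = string.ascii_letters + string.digits
    digest = hashlib.sha256(master_secret + epoch.to_bytes(8, "big")).digest()
    return "".join(alphabet[b % len(alphabet)] for b in digest[:length])

psk_week1 = derive_psk(b"shared-master-secret", 1)
psk_week2 = derive_psk(b"shared-master-secret", 2)
```

Because the derivation is deterministic, rotation only requires agreement on the epoch; the router (e.g. via DD-WRT scripting) and clients never exchange the new key over the air.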

Keywords: Wi-Fi, WPS, TLS, DD-WRT

Procedia PDF Downloads 227
20053 Forecasting Age-Specific Mortality Rates and Life Expectancy at Births for Malaysian Sub-Populations

Authors: Syazreen N. Shair, Saiful A. Ishak, Aida Y. Yusof, Azizah Murad

Abstract:

In this paper, we forecast age-specific Malaysian mortality rates and life expectancies at birth by gender and by ethnic group, including Malay, Chinese, and Indian. Two mortality forecasting models are adopted: the original Lee-Carter model and its recent modified version, the product-ratio coherent model. While the first forecasts the mortality rates for each subpopulation independently, the latter accounts for the relationships between sub-populations. The evaluation of both models uses out-of-sample forecast errors: the mean absolute percentage error (MAPE) for mortality rates and the mean forecast error (MFE) for life expectancy at birth. The best model is then used to produce long-term forecasts up to the year 2030, when Malaysia is expected to become an aged nation. The results suggest that, in terms of overall accuracy, the product-ratio model performs better than the original Lee-Carter model. The inclusion of the lower-mortality group (Chinese) in the subpopulation model can improve the forecasts for the high-mortality groups (Malay and Indian).
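The Lee-Carter fit underlying both models can be sketched via the usual SVD estimation of ln m(x,t) = a_x + b_x k_t. The mortality surface below is synthetic and exactly rank-one, for illustration only.

```python
import numpy as np

def lee_carter(log_mx):
    """Estimate Lee-Carter parameters a_x, b_x, k_t from a matrix of
    log death rates (rows = ages, columns = years) via the usual SVD fit."""
    ax = log_mx.mean(axis=1)
    centered = log_mx - ax[:, None]
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    bx = U[:, 0]
    kt = s[0] * Vt[0]
    # Standard identification constraints: sum(bx) = 1, sum(kt) = 0.
    kt = kt * bx.sum()
    bx = bx / bx.sum()
    return ax, bx, kt

# Synthetic log-mortality surface (5 ages x 30 years) with a downward
# trend in the period index, standing in for real subpopulation data.
ages, years = 5, 30
true_ax = np.linspace(-6.0, -2.0, ages)
true_bx = np.full(ages, 1.0 / ages)
true_kt = np.linspace(5.0, -5.0, years)
log_mx = true_ax[:, None] + true_bx[:, None] * true_kt[None, :]

ax, bx, kt = lee_carter(log_mx)
```

Forecasting then reduces to extrapolating the period index k_t (usually as a random walk with drift); the product-ratio model applies the same machinery to the geometric mean and the ratios of the subpopulation rates.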

Keywords: coherent forecasts, life expectancy at births, Lee-Carter model, product-ratio model, mortality rates

Procedia PDF Downloads 215
20052 Study and Simulation of a Sever Dust Storm over West and South West of Iran

Authors: Saeed Farhadypour, Majid Azadi, Habibolla Sayyari, Mahmood Mosavi, Shahram Irani, Aliakbar Bidokhti, Omid Alizadeh Choobari, Ziba Hamidi

Abstract:

In recent decades, the frequency of dust events has increased significantly in the west and south-west of Iran. First, dust events during the period 1990-2013 are surveyed using historical dust data collected at 6 weather stations scattered over the west and south-west of Iran. After statistical analysis of the observational data, one of the most severe dust storm events in the region, from 3rd to 6th July 2009, is selected and analyzed. The WRF-Chem model is used to simulate the PM10 load and its transport into these areas. The initial and lateral boundary conditions for the model are obtained from GFS data with 0.5°×0.5° spatial resolution. In the simulation, two aerosol schemes (GOCART and MADE/SORGAM) with 3 options (chem_opt = 106, 300, and 303) were evaluated. The statistical analysis of the historical data showed that the south-west of Iran has a high frequency of dust events: Bushehr station has the highest frequency among the stations and Urmia station the lowest. Over the period 1990 to 2013, the years 2009 and 1998, with 3221 and 100 events respectively, had the highest and lowest numbers of dust events; by month, June and July had the highest frequency of dust events and December the lowest. The model results showed that the MADE/SORGAM scheme predicted the values and trends of PM10 better than the other schemes and showed the best performance in comparison with the observations. Finally, the PM10 distributions and surface wind maps obtained from the numerical modelling showed the formation of dust plumes in Iraq and Syria and their transport to the west and south-west of Iran. In addition, comparing the MODIS satellite image acquired on 4th July 2009 with the model output at the same time showed the good ability of WRF-Chem to simulate the spatial distribution of dust.

Keywords: dust storm, MADE/SORGAM scheme, PM10, WRF-Chem

Procedia PDF Downloads 268
20051 Characterization of Atmospheric Aerosols by Developing a Cascade Impactor

Authors: Sapan Bhatnagar

Abstract:

Micron-size particles emitted from different sources and produced by combustion have serious negative effects on human health and the environment; they can penetrate deep into our lungs through the respiratory system. Determining the amount of particulates present per cubic meter of atmosphere is necessary to monitor, regulate, and model atmospheric particulate levels. A cascade impactor is used to collect the atmospheric particulates, and by gravimetric analysis their concentrations in the atmosphere over different size ranges can be determined. Cascade impactors have long been used for the classification of particles by aerodynamic size. They operate on the principle of inertial impaction. An impactor consists of a number of stages, each having an impaction plate and a nozzle, with the collection plates connected in series with smaller and smaller cut-off diameters. The air stream passes through the nozzles and over the plates; particles in the stream having large enough inertia impact upon the plate, and smaller particles pass on to the next stage. By designing each successive stage with a higher air stream velocity in the nozzle, smaller-diameter particles are collected at each stage. The impactor built here consists of 4 steel stages, each with a cut-off diameter below 10 microns. Each stage has collection plates soaked with oil to prevent bounce, which allows the impactor to function at high mass concentrations: even after the plate is coated with particles, an incoming particle still meets a wet surface, which significantly reduces particle bounce. The particles too small to be impacted on the last collection plate are then collected on a backup microglass fiber filter; the fibers provide a larger surface area to which particles may adhere, and the voids in the filter media help reduce particle re-entrainment.
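The stage cut-off described above can be sketched from the critical Stokes number for a round impactor jet, Stk50 ≈ 0.24. The nozzle geometry and flow values below are illustrative textbook numbers, not the dimensions of the impactor built here, and the slip correction factor is taken as 1.

```python
import math

def cutoff_diameter(mu, W, U, rho_p, stk50=0.24, Cc=1.0):
    """Stage cut-off diameter d50 from the critical Stokes number of a
    round impactor jet: Stk50 = rho_p * Cc * d50^2 * U / (9 * mu * W),
    where mu is air viscosity, W the nozzle diameter, U the jet velocity
    and rho_p the particle density. Returns d50 in meters."""
    return math.sqrt(9.0 * mu * W * stk50 / (rho_p * U * Cc))

# Illustrative stage: air at ~20 C (mu = 1.81e-5 Pa s), 2 mm nozzle,
# 10 m/s jet, unit-density particles (rho_p = 1000 kg/m^3).
d50 = cutoff_diameter(mu=1.81e-5, W=2e-3, U=10.0, rho_p=1000.0)
d50_um = d50 * 1e6  # convert to microns
```

The formula shows why successive stages with faster jets (larger U, smaller W) capture progressively finer particles: d50 scales as the square root of W/U.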

Keywords: aerodynamic diameter, cascade, environment, particulates, re-entrainment

Procedia PDF Downloads 317
20050 Efficient Sampling of Probabilistic Program for Biological Systems

Authors: Keerthi S. Shetty, Annappa Basava

Abstract:

In recent years, the modelling of biological systems represented by biochemical reactions has become increasingly important in systems biology. Such systems are highly stochastic in nature, and a probabilistic model is often used to describe them. One of the main challenges in systems biology is to combine absolute experimental data into a probabilistic model. This challenge arises because (1) some molecules may be present in relatively small quantities, (2) there is switching between the individual elements present in the system, and (3) the process is inherently stochastic at the level at which observations are made. In this paper, we describe a novel idea for combining absolute experimental data into a probabilistic model using the tool R2. Through a case study of the transcription process in prokaryotes, we explain how biological systems can be written as probabilistic programs that combine experimental data into the model. The model developed is then analysed in terms of intrinsic noise and the exact sampling of switching times between the individual elements in the system. We have mainly concentrated on inferring the number of genes in ON and OFF states from the experimental data.
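Exact sampling of switching times in a two-state (ON/OFF) gene, as analysed above, can be sketched with Gillespie's algorithm; the rate constants below are arbitrary illustrative values, not inferred from data.

```python
import random

def telegraph_ssa(k_on, k_off, t_end, seed=0):
    """Exact (Gillespie) sampling of a gene switching between OFF (0)
    and ON (1): waiting times are exponential with the rate of the
    currently enabled transition. Returns the (time, state) events."""
    rng = random.Random(seed)
    t, state = 0.0, 0  # start in the OFF state
    events = [(0.0, 0)]
    while True:
        rate = k_on if state == 0 else k_off
        t += rng.expovariate(rate)  # exact exponential waiting time
        if t > t_end:
            break
        state = 1 - state
        events.append((t, state))
    return events

def fraction_on(events, t_end):
    """Time-weighted fraction of the interval spent in the ON state."""
    total = 0.0
    for (t0, s0), (t1, _) in zip(events, events[1:] + [(t_end, -1)]):
        if s0 == 1:
            total += t1 - t0
    return total / t_end

events = telegraph_ssa(k_on=1.0, k_off=0.5, t_end=100.0)
frac_on = fraction_on(events, 100.0)
```

With these rates the stationary probability of being ON is k_on / (k_on + k_off) = 2/3, so a long trajectory's ON fraction should hover near that value; such sampled switching times are what the inference in the paper conditions on.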

Keywords: systems biology, probabilistic model, inference, biology, model

Procedia PDF Downloads 343