Search results for: testing simulation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7784

1694 Optimal Dynamic Regime for CO Oxidation Reaction Discovered by Policy-Gradient Reinforcement Learning Algorithm

Authors: Lifar M. S., Tereshchenko A. A., Bulgakov A. N., Guda S. A., Guda A. A., Soldatov A. V.

Abstract:

Metal nanoparticles are widely used as heterogeneous catalysts to activate adsorbed molecules and reduce the energy barrier of the reaction. The reaction product yield depends on the interplay between elementary processes: adsorption, activation, reaction, and desorption. These processes, in turn, depend on the inlet feed concentrations, temperature, and pressure. Under stationary conditions, the active surface sites may be poisoned by reaction byproducts or blocked by thermodynamically adsorbed gaseous reagents, and the yield of reaction products can drop significantly. By contrast, dynamic control accounts for the changes in the surface properties and adjusts the reaction parameters accordingly; it may therefore be more efficient than stationary control. In this work, a reinforcement learning algorithm has been applied to control a simulation of CO oxidation on a catalyst. The policy gradient algorithm learns to maximize the CO₂ production rate based on the CO and O₂ flows at a given time step. Nonstationary solutions were found for the regime with surface deactivation. The maximal product yield was achieved for periodic variations of the gas flows, ensuring a balance between available adsorption sites and the concentration of activated intermediates. This methodology opens a perspective for the optimization of catalytic reactions under nonstationary conditions.
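
The abstract does not give implementation details; as a rough illustration of the policy-gradient idea it describes, the following minimal REINFORCE-style sketch trains a softmax policy over candidate CO feed fractions on a toy surface-coverage model. The rate constants, state variables and reward below are invented placeholders, not the authors' kinetic model.

```python
# Minimal REINFORCE sketch for dynamic control of a toy CO-oxidation model.
# All rate constants, the state representation and the reward are illustrative
# assumptions; they are not the kinetic model used by the authors.
import numpy as np

rng = np.random.default_rng(0)
ACTIONS = np.array([0.2, 0.5, 0.8])          # candidate CO fractions in the feed
K_ADS, K_RXN, K_DES = 0.4, 0.8, 0.05         # toy adsorption/reaction/desorption rates

def step(theta_co, theta_o, x_co):
    """Advance toy surface coverages one time step; return new state and CO2 rate."""
    free = max(1.0 - theta_co - theta_o, 0.0)
    rate = K_RXN * theta_co * theta_o                      # CO2 production (reward)
    theta_co += K_ADS * x_co * free - rate - K_DES * theta_co
    theta_o  += K_ADS * (1 - x_co) * free - rate
    return np.clip(theta_co, 0, 1), np.clip(theta_o, 0, 1), rate

def policy_probs(w, state):
    logits = w @ state                                     # linear policy, softmax head
    logits -= logits.max()
    p = np.exp(logits)
    return p / p.sum()

w = np.zeros((len(ACTIONS), 3))                            # weights: actions x features
alpha, episodes, horizon = 0.05, 300, 100

for ep in range(episodes):
    theta_co, theta_o = 0.0, 0.0
    grads, rewards = [], []
    for t in range(horizon):
        state = np.array([theta_co, theta_o, 1.0])
        p = policy_probs(w, state)
        a = rng.choice(len(ACTIONS), p=p)
        theta_co, theta_o, r = step(theta_co, theta_o, ACTIONS[a])
        one_hot = np.eye(len(ACTIONS))[a]
        grads.append(np.outer(one_hot - p, state))         # grad of log pi(a|s)
        rewards.append(r)
    returns = np.cumsum(rewards[::-1])[::-1]               # undiscounted returns-to-go
    baseline = returns.mean()
    for g, G in zip(grads, returns):
        w += alpha * (G - baseline) * g                    # REINFORCE update

print("learned CO-fraction preferences at an empty surface:",
      policy_probs(w, np.array([0.0, 0.0, 1.0])).round(3))
```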

Keywords: artificial intelligence, catalyst, CO oxidation, reinforcement learning, dynamic control

Procedia PDF Downloads 130
1693 2 Stage CMOS Regulated Cascode Distributed Amplifier Design Based On Inductive Coupling Technique in Submicron CMOS Process

Authors: Kittipong Tripetch, Nobuhiko Nakano

Abstract:

This paper proposes one-stage and two-stage CMOS Complementary Regulated Cascode Distributed Amplifier (CRCDA) designs based on inductive and transformer coupling techniques. A distributed amplifier is usually based on inductive coupling between the gates of the MOSFETs and between their drains. This paper instead proposes a new idea: coupling the gates of the first and second stages of the regulated cascode amplifier through the differential primary windings of a transformer, and coupling the drains of the first and second stages through the differential secondary windings. The paper also proposes a polynomial model of the silicon transformer passive equivalent circuit from Nanyang Technological University, which is used to extract the frequency response of the transformer. Cadence simulation results are used to verify the validity of the transformer polynomial model, which can then be used to design the distributed amplifier without Cadence. The four parameters of the two-port scattering matrix of the proposed circuit are derived as functions of the four parameters of the impedance matrix.
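
The last sentence refers to the standard two-port conversion between the impedance (Z) matrix and the scattering (S) matrix. For a common real reference impedance Z0 at both ports, S = (Z - Z0*I) @ inv(Z + Z0*I); a small numerical sketch (with an arbitrary example Z-matrix, not the amplifier's actual parameters) is:

```python
# Sketch of the standard 2-port Z-parameter to S-parameter conversion,
# S = (Z - Z0*I) @ inv(Z + Z0*I), assuming an identical real reference
# impedance Z0 at both ports. The example Z-matrix is arbitrary, not taken
# from the proposed amplifier.
import numpy as np

def z_to_s(Z, Z0=50.0):
    I = np.eye(Z.shape[0])
    return (Z - Z0 * I) @ np.linalg.inv(Z + Z0 * I)

Z_example = np.array([[30 + 10j, 5 - 2j],
                      [40 + 5j, 60 + 20j]])   # placeholder impedance matrix (ohms)
S = z_to_s(Z_example)
print(np.round(S, 3))
```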

Keywords: CMOS regulated cascode distributed amplifier, silicon transformer modeling with polynomial, low power consumption, distributed amplification technique

Procedia PDF Downloads 512
1692 Development and Validation of Thermal Stability in a Complex System ABDM with Two ASICs Using NISA and COMSOL Tools

Authors: A. Oukaira, A. Lakhssassi, O. Ettahri

Abstract:

To achieve good thermal management in an ABDM (Adapter Board Detector Module) card, we must first control the temperature and its gradient from the first step in the design of the ASIC integrated circuits of our complex system. In this paper, our main goal is to develop and validate the thermal stability in order to get an idea of the heat flow around the ASIC in the transient regime and thus address the thermal issues for integrated circuits on the ABDM card. However, we need heat-source simulations for the ABDM card to establish its thermal map. This led us to perform simulations at each ASIC, which allow us to understand the thermal map of the ABDM and find real solutions for each part of our complex system, which contains 36 ABDM cards, taking into account the different layers around the ASIC. To run a transient simulation in NISA, we had to build a power modulation function in time, TIMEAMP. The maximum power generated in the ASIC is 0.6 W. We distributed this power uniformly over the volume of the ASIC and applied it for 5 seconds to visualize the evolution and distribution of heat around the ASIC. Dirichlet boundary conditions (DBC) were applied around the ABDM at 25°C. Immediately after these simulations in the NISA tool, we validate them with the COMSOL tool, a modular finite element numerical calculation software for modeling a wide variety of physical phenomena characterizing a real problem. It is also a design tool thanks to its ability to handle 3D geometries of complex systems.
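
As a rough companion to the workflow described (a transient thermal solve with a 25°C Dirichlet boundary and a 0.6 W source applied for 5 seconds), here is a minimal one-dimensional explicit finite-difference sketch; the geometry and material constants are placeholders, not the ABDM/ASIC values used with NISA and COMSOL.

```python
# Minimal 1D explicit finite-difference sketch of a transient heat-conduction
# problem with 25 degC Dirichlet boundaries and a volumetric source switched on
# for 5 s, loosely mirroring the NISA/COMSOL workflow described above.
# Geometry and material constants are placeholders, not the ABDM/ASIC values.
import numpy as np

L, n = 0.02, 101                        # domain length (m), grid points
dx = L / (n - 1)
alpha = 1.0e-6                          # thermal diffusivity (m^2/s), placeholder
k = 1.0                                 # thermal conductivity (W/m/K), placeholder
dt = 0.4 * dx**2 / alpha                # stable explicit time step
T = np.full(n, 25.0)                    # initial and boundary temperature (degC)

source = np.zeros(n)
source[45:56] = 0.6 / (10 * dx * 1e-4)  # 0.6 W spread over a small "ASIC" region,
                                        # divided by an assumed 1 cm^2 cross-section
t, t_end = 0.0, 5.0
while t < t_end:
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T[1:-1] += dt * (alpha * lap + alpha / k * source[1:-1])
    T[0], T[-1] = 25.0, 25.0            # Dirichlet boundary condition at 25 degC
    t += dt

print(f"peak temperature after {t_end:.0f} s: {T.max():.1f} degC")
```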

Keywords: ABDM, APD, thermal mapping, complex system

Procedia PDF Downloads 264
1691 Exoskeleton Response During Infant Physiological Knee Kinematics And Dynamics

Authors: Breanna Macumber, Victor A. Huayamave, Emir A. Vela, Wangdo Kim, Tamara T. Chamber, Esteban Centeno

Abstract:

Spina bifida is a type of neural tube defect that affects the nervous system and can lead to problems such as total leg paralysis. Treatment requires physical therapy and rehabilitation. Robotic exoskeletons have been used for rehabilitation to train muscle movement and assist in injury recovery; however, current models focus on the adult population and not on the infant population. The proposed framework aims to couple a musculoskeletal infant model with a robotic exoskeleton using vacuum-powered artificial muscles to provide rehabilitation to infants affected by spina bifida. The study that drove the input values for the robotic exoskeleton used motion capture technology to collect data from the spontaneous kicking movement of a 2.4-month-old infant lying supine. OpenSim was used to develop the musculoskeletal model, and inverse kinematics was used to estimate hip joint angles. A total of 4 kicks (A, B, C, D) were selected, and the selection was based on range, transient response, and stable response. Kicks had at least 5° of range of motion with a smooth transient response and a stable period. The robotic exoskeleton used a Vacuum-Powered Artificial Muscle (VPAM) whose structure comprised cells that were clipped in a collapsed state and unclipped when desired to match the infant's age. The artificial muscle works with vacuum pressure: when air is removed, the muscle contracts, and when air is added, the muscle relaxes. Bench testing was performed using a 6-month-old infant mannequin. The previously developed exoskeleton performed well with controlled ranges of motion and frequencies, which are typical of rehabilitation protocols for infants suffering from spina bifida. However, the random kicking motion in this study contained high-frequency kicks, and the exoskeleton was not able to accurately replicate all the investigated kicks. Kick 'A' had a greater error when compared to the other kicks. This study has the potential to advance the infant rehabilitation field.

Keywords: musculoskeletal modeling, soft robotics, rehabilitation, pediatrics

Procedia PDF Downloads 83
1690 3D Hybrid Multiphysics Lattice Boltzmann Model for Studying the Flow Behavior of Emulsions in Structured Rectangular Microchannels

Authors: Luma Al-Tamimi, Hassan Farhat, Wessam Hasan

Abstract:

A three-dimensional (3D) hybrid quasi-steady thermal lattice Boltzmann model is developed to couple the effects of surfactant, temperature, interfacial tension, and contact angle. This 3D model is an extended scheme of a previously introduced two-dimensional (2D) hybrid lattice Boltzmann model. The 3D model is used to study the combined multi-physics effects on emulsion systems flowing in rectangular microchannels with and without confinements, where the suspended phase is made of droplets, plugs, or a mixture of both. The simulation results show that emulsion systems with plugs as the suspended phase are more efficient than those with droplets, whereas mixed systems that form large plugs through coalescence have even greater efficiency. The 3D contact angle model generates results matching those of the 2D model, which were validated with experiments. Furthermore, the effects of various confinements on adhering single-drop systems are investigated to delineate their influence on the power required for transporting the suspended phase through the channel. It is shown that the deeper the constriction, the lower the system efficiency. Increasing the surfactant concentration or fluid temperature in a channel with a confinement has a substantial positive effect on oil droplet transportation.

Keywords: lattice Boltzmann method, thermal, contact angle, surfactants, high viscosity ratio, porous media

Procedia PDF Downloads 175
1689 Optical Coherence Tomography Imaging of Epidermal Hyperplasia in Vivo in a Mouse Model of Oxazolone-Induced Atopic Dermatitis

Authors: Eric Lacoste

Abstract:

Laboratory animals are currently widely used as models of human pathologies in dermatology such as atopic dermatitis (AD). These models provide a better understanding of the pathophysiology of this complex and multifactorial disease, the discovery of potential new therapeutic targets and the testing of the efficacy of new therapeutics. However, confirmation of the correct development of AD is mainly based on histology from skin biopsies, requiring invasive surgery or euthanasia of the animals, plus slicing and staining protocols. There are, however, readily accessible imaging technologies such as Optical Coherence Tomography (OCT), which allow non-invasive visualization of the main histological structures of the skin (such as the stratum corneum, epidermis, and dermis) and assessment of the dynamics of the pathology or the efficacy of new treatments. Briefly, female immunocompetent hairless mice (SKH1 strain) were sensitized and challenged topically on the back and ears for about 4 weeks. Back skin and ear thickness were measured using a calliper on 3 occasions per week, in addition to a macroscopic evaluation of atopic dermatitis lesions on the back: erythema, scaling and excoriation scoring. In addition, OCT was performed on the back and ears of the animals. OCT produces a virtual in-depth section (tomography) of the imaged organ using a laser, a camera and image processing software, allowing fast, non-contact and non-denaturing acquisitions of the explored tissues. To perform the imaging sessions, the animals were anesthetized with isoflurane and placed on a support under the OCT for a total examination time of 5 to 10 minutes. The results show a good correlation of the OCT technique with classical HES histology for skin lesion structures such as hyperkeratosis, epidermal hyperplasia, and dermis thickness. This OCT imaging technique can, therefore, be used in live animals at different times for longitudinal evaluation by repeated measurements of lesions in the same animals, in addition to the classical histological evaluation. Furthermore, this original imaging technique speeds up research protocols, reduces the number of animals and refines the use of laboratory animals.

Keywords: atopic dermatitis, mouse model, oxazolone model, histology, imaging

Procedia PDF Downloads 132
1688 Numerical Determination of Transition of Cup Height between Hydroforming Processes

Authors: H. Selcuk Halkacı, Mevlüt Türköz, Ekrem Öztürk, Murat Dilmec

Abstract:

Various attempts concerning the low formability issue of lightweight materials like aluminium and magnesium alloys are being investigated in many studies. Advanced forming processes such as hydroforming are one of these attempts. In recent decades, the sheet hydroforming process has attracted increasing interest, particularly in the automotive and aerospace industries. This process has many advantages, such as enhanced formability, the capability to form complex parts, higher dimensional accuracy and surface quality, reduction of tool costs and reduced die wear compared to conventional sheet metal forming processes. There are two types of sheet hydroforming. One of them is hydromechanical deep drawing (HDD), a special drawing process in which a pressurized fluid medium is used instead of one of the die halves of the conventional deep drawing (CDD) process. The other is sheet hydroforming with die (SHF-D), in which the blank is formed by the action of fluid pressure and takes the shape of the die half. In this study, the transition of cup height with respect to cup diameter between the processes was determined by simulating the processes with finite element analysis. First, the SHF-D process was simulated for a 40 mm cup diameter at different cup heights ranging from 10 mm to 30 mm, and the cup height-to-diameter ratio at which it is no longer possible to obtain a successful forming was determined. Then the same ratio was checked for a different cup diameter of 60 mm. Thereafter, the thickness distributions of the cups formed by the SHF-D and HDD processes were compared for these cup heights. Consequently, it was found that the thickness distribution in the HDD process was more uniform in the analyses.

Keywords: finite element analysis, HDD, hydroforming, sheet metal forming, SHF-D

Procedia PDF Downloads 429
1687 Optimal Geothermal Borehole Design Guided By Dynamic Modeling

Authors: Hongshan Guo

Abstract:

Ground-source heat pumps (GSHPs) provide stable and reliable heating and cooling when designed properly. The confounding effect of the borehole depth on a GSHP system, however, is rarely taken into account in any optimization: the determination of the borehole depth usually comes prior to the selection of the corresponding system components and any subsequent optimization of the GSHP system. The depth of the borehole is important to any GSHP system because the shallower the borehole, the larger the fluctuation of the near-borehole soil temperature. This can lead to fluctuations of the coefficient of performance (COP) of the GSHP system in the long term when the heating/cooling demand is large. Yet the deeper the boreholes are drilled, the higher the drilling cost and the operational expenses for circulation. A controller is developed that reads different building load profiles, optimizes for the smallest costs and temperature fluctuation at the borehole wall, and eventually provides the borehole depth as its output. Due to the nonlinear dynamic nature of the GSHP system, the model predictive control (MPC) formulation was found to be more feasible than a conventional optimal control problem, since both the trajectory history during the iterations and the final output can be computed and compared against. Aside from a few scenarios with different weighting factors, the resulting system costs were verified against the literature and reports and were found to be relatively accurate, while the temperature fluctuation at the borehole wall was also found to be within an acceptable range. It was therefore determined that MPC is adequate for optimizing the investment as well as the system performance for various outputs.
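
A toy illustration of the trade-off the abstract describes (deeper boreholes cost more to drill and circulate but damp the borehole-wall temperature fluctuation) is sketched below; the cost and fluctuation models and the weighting factor are invented placeholders, not the authors' dynamic model or MPC formulation.

```python
# Toy illustration of the depth trade-off described above: drilling cost grows with
# borehole depth while the annual borehole-wall temperature fluctuation shrinks.
# The cost and fluctuation models and the weighting factor are invented placeholders,
# not the authors' dynamic model or MPC formulation.
import numpy as np

depths = np.arange(50, 301, 10)                  # candidate borehole depths (m)
annual_load = 8000.0                             # building load proxy (kWh), placeholder

drill_cost = 60.0 * depths                       # e.g. 60 currency units per metre
pump_cost = 0.02 * annual_load * depths / 100.0  # circulation penalty grows with depth
temp_fluct = 12.0 * annual_load / 1000.0 / np.sqrt(depths)  # fluctuation shrinks with depth

w = 150.0                                        # weighting factor (cost units per K)
objective = drill_cost + pump_cost + w * temp_fluct
best = depths[np.argmin(objective)]
print(f"depth minimising the weighted objective: {best} m, "
      f"predicted wall fluctuation {12.0 * annual_load / 1000.0 / np.sqrt(best):.1f} K")
```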

Keywords: geothermal borehole, MPC, dynamic modeling, simulation

Procedia PDF Downloads 287
1686 Power Allocation Algorithm for Orthogonal Frequency Division Multiplexing Based Cognitive Radio Networks

Authors: Bircan Demiral

Abstract:

Cognitive radio (CR) is a promising technology that addresses the spectrum scarcity problem for future wireless communications. Orthogonal Frequency Division Multiplexing (OFDM) technology provides more power band ratios for cognitive radio networks (CRNs). While CR is a solution to spectrum scarcity, it also brings up the capacity problem. In this paper, a novel power allocation algorithm that aims at maximizing the sum capacity in OFDM-based cognitive radio networks is proposed. The proposed allocation algorithm is based on the previously developed water-filling algorithm. To reduce the computational complexity of the water-filling calculation, the proposed algorithm allocates the total power per subcarrier. The power allocated to the subcarriers increases the sum capacity. To demonstrate this increase, a MATLAB program was used, and the proposed power allocation was compared with the average power allocation, water-filling and general power allocation algorithms. The water-filling algorithm performed worse than the proposed algorithm, while it performed better than the other two algorithms. The proposed algorithm is better than the other algorithms in terms of capacity increase. In addition, the effect of a change in the number of subcarriers on capacity was discussed. Simulation results show that increasing the number of subcarriers increases the capacity.
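
For reference, the textbook water-filling allocation that the proposed algorithm is compared against can be sketched as follows; the channel gains, noise power and power budget are arbitrary illustrative values, and this is the classical bisection solution rather than the authors' modified per-subcarrier scheme.

```python
# Textbook water-filling power allocation over OFDM subcarriers, included as a
# reference point for the comparison described above. The channel gains, noise
# power and power budget are arbitrary illustrative values.
import numpy as np

def water_filling(gains, noise, p_total, iters=100):
    """Allocate p_total across subcarriers to maximise sum log2(1 + g*p/noise)."""
    inv_snr = noise / gains                      # "floor heights" for each subcarrier
    lo, hi = 0.0, inv_snr.max() + p_total        # bisection bounds on the water level
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - inv_snr, 0.0)
        if p.sum() > p_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(mu - inv_snr, 0.0)

rng = np.random.default_rng(1)
gains = rng.exponential(1.0, size=16)            # illustrative subcarrier channel gains
noise, p_total = 0.1, 4.0
p = water_filling(gains, noise, p_total)
capacity = np.sum(np.log2(1.0 + gains * p / noise))
print(f"allocated power {p.sum():.3f} of {p_total}, sum capacity {capacity:.2f} bit/s/Hz")
```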

Keywords: cognitive radio network, OFDM, power allocation, water filling

Procedia PDF Downloads 137
1685 A Blockchain-Based Framework for the Development of a Social Economy Platform

Authors: Hasna Elalaoui Elabdallaoui, Abdelaziz Elfazziki, Mohamed Sadgal

Abstract:

Outlines: The social economy is a moral approach to solidarity applied to the development of projects. To reconcile economic activity and social equity, crowdfunding is an alternative means of financing social projects. Several collaborative blockchain platforms exist. Blockchain eliminates the need for a central authority or an inconsiderate middleman. The costs for a successful crowdfunding campaign are also reduced, since there is no commission to be paid to an intermediary. It improves the transparency of record keeping and avoids delegating authority to authorities who may be prone to corruption. Objectives: The objectives are to define a software infrastructure for the participatory financing of projects within a social and solidarity economy, allowing transparent, secure, and fair management, and to provide a financial mechanism that improves financial inclusion. Methodology: The proposed methodology is: a literature review of crowdfunding platforms, a literature review of financing mechanisms, requirements analysis and project definition, a business plan, the platform development process and implementation technology, and testing an MVP. Contributions: The solution consists of proposing a new approach to crowdfunding based on Islamic financing, namely the principle of Musharaka, a financial innovation that integrates ethics and the social dimension into contemporary banking practices. Conclusion: Crowdfunding platforms need to secure projects and allow only quality projects, but also to offer a wide range of options to funders. Thus, a framework based on blockchain technology and Islamic financing is proposed to manage this arbitration between the quality and quantity of options. The proposed financing system, "Musharaka", is a mode of financing that prohibits interest and uncertainty. The implementation is offered on the secure Ethereum platform, as investors sign and initiate transactions for contributions using their digital signature wallets, managed by a cryptographic algorithm and smart contracts. Our proposal is illustrated by a crop irrigation project in the Marrakech region.
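
The profit- and loss-sharing arithmetic that a Musharaka-style contract encodes can be sketched in a few lines; under the usual interpretation, profits are split according to pre-agreed ratios while losses are borne strictly in proportion to capital contributions. The contributor names, amounts and ratios below are illustrative, and this plain-Python sketch is not the authors' Ethereum smart contract.

```python
# Plain-Python sketch of the profit/loss distribution a Musharaka-style contract
# would encode on-chain: profits are split according to pre-agreed ratios, while
# losses are borne strictly in proportion to capital contributions. Contributor
# names, amounts and ratios are illustrative; this is not the authors' smart contract.
def musharaka_distribution(capital, profit_ratio, result):
    total_capital = sum(capital.values())
    if result >= 0:       # profit: shared by pre-agreed ratios
        return {p: result * profit_ratio[p] for p in capital}
    # loss: shared in proportion to capital contributed
    return {p: result * capital[p] / total_capital for p in capital}

capital = {"funder_A": 6000.0, "funder_B": 3000.0, "project_owner": 1000.0}
profit_ratio = {"funder_A": 0.5, "funder_B": 0.3, "project_owner": 0.2}

print(musharaka_distribution(capital, profit_ratio, 2500.0))   # a profitable season
print(musharaka_distribution(capital, profit_ratio, -800.0))   # a loss-making season
```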

Keywords: social economy, Musharaka, blockchain, smart contract, crowdfunding

Procedia PDF Downloads 77
1684 Monetary Policy and Assets Prices in Nigeria: Testing for the Direction of Relationship

Authors: Jameelah Omolara Yaqub

Abstract:

One of the main reasons for the existence of a central bank is the belief that central banks have some influence on private sector decisions, which enables the central bank to achieve some of its objectives, especially those of stable prices and economic growth. Under the New Keynesian assumption that prices are not fully flexible in the short run, the central bank can temporarily influence the real interest rate and, therefore, have an effect on real output in addition to nominal prices. There is, therefore, a need for the central bank to monitor, respond to, and influence private sector decisions appropriately. This shows that the central bank and the private sector both affect and are affected by each other, implying considerable interdependence between the sectors. The interdependence may or may not be simultaneous, depending on the level of readily available information and how sensitive prices are to agents' expectations about the future. The aim of this paper is, therefore, to determine whether the interdependence between asset prices and monetary policy is simultaneous or not and how important this relationship is. Studies on the effects of monetary policy have largely used VAR models to identify the interdependence, but most have found small interaction effects. Some earlier studies have ignored the possibility of simultaneous interdependence, while those that have allowed for it used data from developed economies only. This study, therefore, extends the literature by using data from a developing economy where information might not be readily available to influence agents' expectations. In this study, the direction of the relationship among the variables of interest will be tested by carrying out the Granger causality test. Thereafter, the interaction between asset prices and monetary policy in Nigeria will be tested. Asset prices will be represented by the NSE index as well as real estate prices, while monetary policy will be represented by the money supply and the MPR, respectively. The VAR model will be used to analyse the relationship between the variables in order to take account of the potential simultaneity of the interdependence. The study will cover the period between 1980 and 2014 due to data availability. It is believed that the outcome of the research will guide monetary policymakers, especially the CBN, to effectively influence private sector decisions and thereby achieve its objectives of price stability and economic growth.
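
The Granger-causality step can be illustrated with statsmodels; the series below are synthetic stand-ins, since the study's NSE index, real estate price, money supply and MPR data for 1980-2014 are not reproduced here.

```python
# Sketch of the Granger-causality step described above, using statsmodels on
# synthetic monthly series; the actual study uses the NSE index, real-estate prices,
# money supply and the MPR for 1980-2014, which are not reproduced here.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 300
policy = rng.normal(size=n).cumsum()        # stand-in for a monetary policy series
asset = np.zeros(n)                         # stand-in for an asset price series
for t in range(2, n):
    asset[t] = 0.5 * asset[t - 1] + 0.3 * policy[t - 1] + rng.normal()

# Tests whether the second column (policy) Granger-causes the first column (asset).
data = np.column_stack([asset, policy])
results = grangercausalitytests(data, maxlag=4, verbose=False)
for lag, res in results.items():
    fstat, pval = res[0]["ssr_ftest"][:2]
    print(f"lag {lag}: F = {fstat:.2f}, p = {pval:.4f}")
```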

Keywords: asset prices, granger causality, monetary policy rate, Nigeria

Procedia PDF Downloads 220
1683 Discrete Element Modeling of the Effect of Particle Shape on Creep Behavior of Rockfills

Authors: Yunjia Wang, Zhihong Zhao, Erxiang Song

Abstract:

Rockfills are widely used in civil engineering, for example in dams, railways, and airport foundations in mountain areas. A significant long-term post-construction settlement may affect the serviceability or even the safety of rockfill infrastructures. The creep behavior of rockfills is influenced by a number of factors, such as particle size, strength and shape, water condition and stress level. However, the effect of particle shape on rockfill creep remains poorly understood and deserves careful investigation. The particle-based discrete element method (DEM) was used to simulate the creep behavior of rockfills under different boundary conditions. Both angular and rounded particles were considered in this numerical study in order to investigate the influence of particle shape. The preliminary results showed that angular particles experience more breakage and larger creep strains under one-dimensional compression than rounded particles. On the contrary, larger creep strains were observed in the rounded specimens in the direct shear test. The mechanism responsible for this difference is that the possibility of the existence of key particles is higher in rounded assemblies than in angular ones. The above simulations demonstrate that the influence of particle shape on the creep behavior of rockfills can be properly simulated by DEM. The DEM simulation method may facilitate our understanding of the deformation properties of rockfill materials.

Keywords: rockfills, creep behavior, particle crushing, discrete element method, boundary conditions

Procedia PDF Downloads 313
1682 Analyzing Façade Scenarios and Daylight Levels in the Reid Building: A Reflective Case Study on the Designed Daylight under Overcast Sky

Authors: Eman Mayah, Raid Hanna

Abstract:

This study examines the use of daylight in the case of the Reid building at the Glasgow School of Art in the city of Glasgow, UK. In Nordic countries, daylight is one of the main considerations in building design, especially in the face of long, lightless winters. A shortage of daylight, contributing to dark and gloomy conditions, necessitates that designs incorporate strong daylight performance. As such, the building in question is designed to capture natural light for varying needs, with studios located on the North and South façades. The study's approach presents an analysis of different façade scenarios, where daylight from the North is observed, analyzed and compared with daylight from the South façade for various design studios in the building. The findings are then correlated with the daylight levels obtained from a daylight simulation program (Autodesk Ecotect Analysis) for the investigated studios. The study finds a dramatic difference in daylight nature and levels between the North and South façades, where orientation, obstructions and the designed façade fenestrations have major effects on the findings. The study concludes that some of the studios positioned on the North façade do not have a desirable quality of diffused northern light, due to obstructions from neighbouring buildings, the area and volume of the studio, and the shadow effect of the designed mezzanine floor in the studios.

Keywords: daylight levels, educational building, Façade fenestration, overcast weather

Procedia PDF Downloads 405
1681 Application and Utility of the RALE Score for Assessment of Clinical Severity in COVID-19 Patients

Authors: Naridchaya Aberdour, Joanna Kao, Anne Miller, Timothy Shore, Richard Maher, Zhixin Liu

Abstract:

Background: COVID-19 has been and continues to be a strain on healthcare globally, with the number of patients requiring hospitalization exceeding the level of medical support available in many countries. As chest x-rays are the primary respiratory radiological investigation, the Radiological Assessment of Lung Edema (RALE) score was used to quantify the extent of pulmonary infection on baseline imaging. The reproducibility of the RALE score and its associations with clinical outcome parameters were then evaluated to determine implications for patient management and prognosis. Methods: A retrospective study was performed including patients who tested positive for COVID-19 on nasopharyngeal swab within a single Local Health District in Sydney, Australia, with baseline x-ray imaging acquired between January and June 2020. Two independent radiologists reviewed the studies and calculated the RALE scores. Clinical outcome parameters were collected, and statistical analysis was performed to assess the reproducibility of the RALE score and its possible associations with clinical outcomes. Results: A total of 78 patients met the inclusion criteria, with an age range of 4 to 91 years. RALE score concordance between the two independent radiologists was excellent (intraclass correlation coefficient = 0.93, 95% CI = 0.88-0.95, p<0.005). Binomial logistic regression identified a positive correlation with hospital admission (OR 1.87, 95% CI = 1.3-2.6, p<0.005), oxygen requirement (OR 1.48, 95% CI = 1.2-1.8, p<0.005) and invasive ventilation (OR 1.2, 95% CI = 1.0-1.3, p<0.005) for each 1-point increase in RALE score. For each one-year increase in age, there was a negative correlation with recovery (OR 0.05, 95% CI = 0.92-1.0, p<0.01). RALE scores above three were positively associated with hospitalization (Youden index 0.61, sensitivity 0.73, specificity 0.89), and scores above six were positively associated with ICU admission (Youden index 0.67, sensitivity 0.91, specificity 0.78). Conclusion: The RALE score can be used as a surrogate to quantify the extent of COVID-19 infection and has excellent inter-observer agreement. The RALE score could be used to prognosticate and identify patients at high risk of deterioration. Threshold values may also be applied to predict the likelihood of hospital and ICU admission.
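
A cut-off such as the reported values of three and six can be derived by maximising Youden's J = sensitivity + specificity - 1 along the ROC curve; the sketch below uses synthetic scores and outcomes, not the study's patient data.

```python
# How a RALE-score cut-off such as the reported values can be derived: pick the
# threshold that maximises Youden's J = sensitivity + specificity - 1 on the ROC
# curve. The scores and outcomes below are synthetic, not the study's patient data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
admitted = np.r_[np.zeros(40, dtype=int), np.ones(38, dtype=int)]
rale = np.r_[rng.poisson(2, 40), rng.poisson(7, 38)]       # synthetic RALE scores

fpr, tpr, thresholds = roc_curve(admitted, rale)
youden = tpr - fpr                                          # J = sens + spec - 1
best = np.argmax(youden)
print(f"AUC = {roc_auc_score(admitted, rale):.2f}, "
      f"best cut-off >= {thresholds[best]:.0f} "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```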

Keywords: chest radiography, coronavirus, COVID-19, RALE score

Procedia PDF Downloads 178
1680 Dynamic Analysis of a Moderately Thick Plate on Pasternak Type Foundation under Impact and Moving Loads

Authors: Neslihan Genckal, Reha Gursoy, Vedat Z. Dogan

Abstract:

In this study, the dynamic responses of composite plates on elastic foundations subjected to impact and moving loads are investigated. The first-order shear deformation theory (FSDT) is used for moderately thick plates. A Pasternak-type (two-parameter) elastic foundation is assumed, and the elastic foundation effects are integrated into the governing equations. It is assumed that the plate is first hit by a mass as an impact-type loading, and then the mass continues to move on the composite plate as a distributed moving load, which resembles an aircraft landing on an airport pavement. The impact and moving loads are modeled by a mass-spring-damper system with a wheel, and the wheel is assumed to be continuously in contact with the plate after impact. The governing partial differential equations of motion for the displacements are converted into ordinary differential equations in the time domain by using Galerkin's method. Then, these sets of equations are solved by using the Runge-Kutta method. Several parameters, such as the vertical and horizontal velocities of the aircraft, the volume fraction of the steel rebar in the reinforced concrete layer, and the different touchdown locations of the aircraft tire on the runway, are considered in the numerical simulation. The results are compared with those of ABAQUS, a commercial finite element code.
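
After the Galerkin reduction, the problem becomes a set of ordinary differential equations integrated in time; the minimal sketch below integrates a single-mode surrogate (a wheel mass impacting a spring-damper that stands in for one plate mode) with an explicit Runge-Kutta scheme. The mass, stiffness, damping and touchdown velocity are placeholders, not the paper's values.

```python
# After the Galerkin reduction described above, the plate problem becomes a set of
# ODEs in time; this minimal sketch integrates a single-mode surrogate (a wheel mass
# impacting a spring-damper representing one plate mode) with an explicit Runge-Kutta
# scheme. Masses, stiffness and damping are placeholders, not the paper's values.
import numpy as np
from scipy.integrate import solve_ivp

m, c, k = 250.0, 1.2e3, 4.0e5       # wheel mass (kg), damping (N s/m), stiffness (N/m)
v_impact = -2.5                      # vertical touchdown velocity (m/s)

def rhs(t, y):
    z, zdot = y
    return [zdot, -(c * zdot + k * z) / m]

sol = solve_ivp(rhs, (0.0, 0.5), [0.0, v_impact], method="RK45",
                t_eval=np.linspace(0.0, 0.5, 500))
z = sol.y[0]
print(f"peak downward deflection: {abs(z.min())*1000:.1f} mm "
      f"at t = {sol.t[np.argmin(z)]*1000:.0f} ms")
```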

Keywords: elastic foundation, impact, moving load, thick plate

Procedia PDF Downloads 313
1679 Exploration of Copper Fabric in Non-Asbestos Organic Brake-Pads for Thermal Conductivity Enhancement

Authors: Vishal Mahale, Jayashree Bijwe, Sujeet K. Sinha

Abstract:

The range of thermal conductivity (TC) of friction materials (FMs) is a critical issue, since a lower TC leads to the accumulation of frictional heat on the working surface, which results in excessive fade, while a higher TC leads to excessive heat flow towards the back-plate, resulting in boiling of the brake fluid and hence 'spongy brakes'. This phenomenon prohibits braking action, which is most undesirable. Therefore, the TC of FMs across the brake pads should not be high, while along the brake pad it should be high. To enhance TC, metals in the form of powders and fibers are used in FMs. Apart from TC improvement, metals provide strength and structural integrity to the composites. Due to its higher TC, copper (Cu) powder/fiber is the most preferred metallic ingredient in the FM industry. However, Cu powders/fibers are responsible for the generation of metallic wear debris, which has harmful effects on aquatic organisms. Hence, to eliminate the problem of metallic wear debris generation while keeping the positive effect of TC improvement, the incorporation of Cu fabric in NAO brake-pads can be an innovative solution. Keeping this in view, two realistic multi-ingredient FM composites with identical formulations were developed in the form of brake-pads. One composite series consisted of a single layer of Cu fabric in the body of the brake-pad and was designated C1, while a double layer of Cu fabric was incorporated in another brake-pad series, designated C2. The distance of the Cu fabric layer from the back-plate was kept constant for C1 and C2. One more composite (C0) was developed without Cu fabric for the sake of comparison. The developed composites were characterized for physical properties. Tribological performance was evaluated on a full-scale inertia dynamometer following the JASO C 406 testing standard. It was concluded that the Cu fabric successfully improved fade resistance by increasing the conductivity of the composite and also showed a slight improvement in wear resistance. Worn surfaces of the pads and disc were analyzed by SEM and EDAX to study the wear mechanism.

Keywords: brake inertia dynamometer, copper fabric, non-asbestos organic (NAO) friction materials, thermal conductivity enhancement

Procedia PDF Downloads 131
1678 Investigation of the Mechanical and Thermal Properties of a Silver Oxalate Nanoporous Structured Sintered Joint for Micro-joining in Relation to the Sintering Process Parameters

Authors: L. Vivet, L. Benabou, O. Simon

Abstract:

With highly demanding applications in the field of power electronics, there is an increasing need for interconnection materials with properties that can ensure both good mechanical assembly and high thermal/electrical conductivities. So far, lead-free solders have been considered an attractive solution, but recently sintered joints based on nano-silver paste have been used for die attach and have proved to be a promising solution offering increased performance in high-temperature applications. In this work, the main parameters of the bonding process using silver oxalates are studied, mainly the heating rate and the bonding pressure. Their effects on both the mechanical and thermal properties of the sintered layer are evaluated following an experimental design. Pairs of copper substrates with gold metallization are assembled through the sintering process to produce the samples, which are tested using a micro-traction machine. In addition, the obtained joints are examined through microscopy to identify the important microstructural features in relation to the measured properties. The formation of an intermetallic compound at the junction between the sintered silver layer and the gold metallization deposited on copper is also analyzed. Microscopy analysis shows a nanoporous structure of the sintered material. It is found that higher temperature and bonding pressure result in higher densification of the sintered material, giving a higher thermal conductivity of the joint but less mechanical flexibility to accommodate the thermo-mechanical stresses arising during service. The experimental design hence allows the determination of the optimal process parameters to reach sufficient thermal/mechanical properties for a given application. It is also found that the interphase formed between the silver and the gold metallization is the location where fracture occurred after mechanical testing, suggesting that the inter-diffusion mechanism between the different elements of the assembly leads to the formation of a relatively brittle compound.

Keywords: nanoporous structure, silver oxalate, sintering, mechanical strength, thermal conductivity, microelectronic packaging

Procedia PDF Downloads 93
1677 Corneal Confocal Microscopy As a Surrogate Marker of Neuronal Pathology In Schizophrenia

Authors: Peter W. Woodruff, Georgios Ponirakis, Reem Ibrahim, Amani Ahmed, Hoda Gad, Ioannis N. Petropoulos, Adnan Khan, Ahmed Elsotouhy, Surjith Vattoth, Mahmoud K. M. Alshawwaf, Mohamed Adil Shah Khoodoruth, Marwan Ramadan, Anjushri Bhagat, James Currie, Ziyad Mahfoud, Hanadi Al Hamad, Ahmed Own, Peter Haddad, Majid Alabdulla, Rayaz A. Malik

Abstract:

Introduction:- We aimed to test the hypothesis that, using corneal confocal microscopy (a non-invasive method for assessing corneal nerve fibre integrity), patients with schizophrenia would show neuronal abnormalities compared with healthy participants. Schizophrenia is a neurodevelopmental and progressive neurodegenerative disease for which there are no validated biomarkers. Corneal confocal microscopy (CCM) is a non-invasive ophthalmic imaging biomarker that can be used to detect neuronal abnormalities in neuropsychiatric syndromes. Methods:- Patients with schizophrenia (DSM-V criteria) without other causes of peripheral neuropathy and healthy controls underwent CCM, vibration perception threshold (VPT) and sudomotor function testing. The diagnostic accuracy of CCM in distinguishing patients from controls was assessed using the area under the curve (AUC) of the Receiver Operating Characteristic (ROC) curve. Findings:- Participants with schizophrenia (n=17) and controls (n=38) of comparable age (35.7±8.5 vs 35.6±12.2, P=0.96) were recruited. Patients with schizophrenia had significantly higher body weight (93.9±25.5 vs 77.1±10.1, P=0.02) and lower Low Density Lipoproteins (2.6±1.0 vs 3.4±0.7, P=0.02), while systolic and diastolic blood pressure, HbA1c, total cholesterol, triglycerides and High Density Lipoproteins were comparable with control participants. Patients with schizophrenia had significantly lower corneal nerve fiber density (CNFD, fibers/mm2) (23.5±7.8 vs 35.6±6.5, p<0.0001), branch density (CNBD, branches/mm2) (34.4±26.9 vs 98.1±30.6, p<0.0001), and fiber length (CNFL, mm/mm2) (14.3±4.7 vs 24.2±3.9, p<0.0001), but no difference in VPT (6.1±3.1 vs 4.5±2.8, p=0.12) or electrochemical skin conductance (61.0±24.0 vs 68.9±12.3, p=0.23) compared with controls. The diagnostic accuracy (AUC, 95% CI) of CNFD, CNBD and CNFL in distinguishing patients with schizophrenia from healthy controls was 87.0% (76.8-98.2), 93.2% (84.2-102.3) and 93.2% (84.4-102.1), respectively. Conclusion:- CCM can be used to help identify neuronal changes and has a high diagnostic accuracy to distinguish subjects with schizophrenia from healthy controls.

Keywords:

Procedia PDF Downloads 275
1676 Intelligent Agent-Based Model for the 5G mmWave O2I Technology Adoption

Authors: Robert Joseph M. Licup

Abstract:

The deployment of the fifth-generation (5G) mobile system through mmWave frequencies is the new solution to the requirement to provide higher bandwidth readily available to all users. The usage pattern of mobile users has moved towards work-from-home or online-class set-ups because of the pandemic. Previous mobile technologies can no longer meet the high speed and bandwidth requirements needed, given the drastic shift of transactions to the home. The underutilized millimeter-wave (mmWave) frequencies are utilized by fifth-generation (5G) cellular networks to support multi-gigabit-per-second (Gbps) transmission. However, because of the short wavelengths, high path loss, directivity, blockage sensitivity, and narrow beamwidth are some of the technical challenges that need to be addressed. Different tools, technologies, and scenarios are explored to support network design, accurate channel modeling, implementation, and deployment effectively. However, there is a big challenge in how consumers will adopt this solution and maximize the benefits offered by 5G technology. This research proposes to study the intricacies of technology diffusion, individual attitudes and behaviors, and how technology adoption will be attained. An agent-based simulation model, shaped by actual applications, the technology solution, and the related literature, was used to arrive at a computational model. The research examines the different attributes, factors, and intricacies that can affect each identified agent towards technology adoption.
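
The flavour of such an agent-based adoption model can be conveyed with a tiny Bass-style sketch in which agents adopt the 5G mmWave service either spontaneously or through peer influence; the population size, coefficients and attitude weights are invented and are not calibrated to the study's AnyLogic model.

```python
# Tiny agent-based sketch of technology adoption in the spirit described above:
# each agent adopts 5G mmWave service either spontaneously (innovation) or through
# peer influence (imitation). The population size, probabilities and attitude
# weights are invented, not calibrated to the study's AnyLogic model.
import numpy as np

rng = np.random.default_rng(42)
n_agents, steps = 1000, 60                     # agents, monthly time steps
p_innovate, q_imitate = 0.01, 0.35             # Bass-style coefficients (assumed)
attitude = rng.uniform(0.5, 1.5, n_agents)     # heterogeneous willingness to adopt

adopted = np.zeros(n_agents, dtype=bool)
curve = []
for t in range(steps):
    share = adopted.mean()
    prob = np.clip(attitude * (p_innovate + q_imitate * share), 0.0, 1.0)
    adopted |= (~adopted) & (rng.random(n_agents) < prob)
    curve.append(adopted.mean())

print(f"adoption after {steps} months: {curve[-1]:.0%}")
```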

Keywords: agent-based model, AnyLogic, 5G O2I, 5G mmWave solutions, technology adoption

Procedia PDF Downloads 108
1675 Comparative Analysis of Single Versus Multi-IRS Assisted Multi-User Wireless Communication System

Authors: Ayalew Tadese Kibret, Belayneh Sisay Alemu, Amare Kassaw Yimer

Abstract:

Intelligent reflecting surfaces (IRSs) are considered to be a key enabling technology for sixth-generation (6G) wireless networks. IRSs are fabricated electromagnetic (EM) surfaces with integrated electronics and electronically controlled properties, designed particularly for wireless communication. IRSs improve the signal quality at the receiver without the need for complex signal processing or encoding and decoding steps. Improving vital performance parameters such as energy efficiency (EE) and spectral efficiency (SE) has frequently been a primary goal of research in order to meet the increasing requirements for advanced services in future 6G communications. In this research, we conduct a comparative analysis of single- and multi-IRS wireless communication networks in terms of energy and spectral efficiency. Energy efficiency versus user distance, energy efficiency versus signal-to-noise ratio, and spectral efficiency versus user distance are the basis for our results with 1, 2, 4, and 6 IRSs. According to our simulation results, six IRSs perform better than four, two, or a single IRS in terms of energy and spectral efficiency. Overall, our results suggest that multi-IRS-assisted wireless communication systems outperform single-IRS systems in terms of communication performance.
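
A simplified link-budget sketch of the comparison is given below: each IRS contributes a coherently combined reflected path, spectral efficiency is taken as log2(1 + SNR), and energy efficiency as rate divided by total consumed power. The path-loss model, element counts and power figures are assumptions, not the parameters used in the paper.

```python
# Simplified link-budget sketch comparing single- and multi-IRS deployments:
# each IRS contributes a coherently combined reflected path, spectral efficiency
# is log2(1 + SNR), and energy efficiency is rate / total consumed power.
# The path-loss model, element counts and power figures are assumptions,
# not the parameters used in the paper.
import numpy as np

def se_ee(num_irs, n_elements=100, d_user=50.0, bandwidth=10e6):
    tx_power = 1.0                        # W
    noise = 1e-13                         # W, assumed noise floor
    d_tx_irs, d_irs_user = 30.0, d_user   # metres, assumed geometry
    # per-element cascaded channel gain with free-space-like path loss (assumed)
    g = 1e-3 / d_tx_irs**2 * 1e-3 / d_irs_user**2
    # coherent combining over all elements of all IRSs
    snr = tx_power * (num_irs * n_elements)**2 * g / noise
    se = np.log2(1.0 + snr)                            # bit/s/Hz
    total_power = tx_power + 5.0 + num_irs * n_elements * 1e-3  # BS + circuits + IRS elements
    ee = bandwidth * se / total_power                  # bit/J
    return se, ee

for m in (1, 2, 4, 6):
    se, ee = se_ee(m)
    print(f"{m} IRS(s): SE = {se:5.1f} bit/s/Hz, EE = {ee/1e6:6.1f} Mbit/J")
```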

Keywords: sixth-generation (6G), wireless networks, intelligent reflecting surfaces, energy efficiency, spectral efficiency

Procedia PDF Downloads 26
1674 Early Warning System of Financial Distress Based On Credit Cycle Index

Authors: Bi-Huei Tsai

Abstract:

Previous studies on financial distress prediction chose the conventional failing versus non-failing dichotomy; however, the extent of distress differs substantially among different financial distress events. To address this problem, 'non-distressed', 'slightly-distressed' and 'reorganization and bankruptcy' are used in this article to approximate the continuum of corporate financial health. This paper explains different financial distress events using a two-stage method. First, this investigation adopts firm-specific financial ratios, corporate governance and market factors to measure the probability of the various financial distress events based on multinomial logit models. Specifically, a bootstrapping simulation is performed to examine the difference in estimated misclassification cost (EMC). Second, this work further applies macroeconomic factors to establish a credit cycle index and determines the distressed cut-off indicator of the two-stage models using this index. Two different models, a one-stage and a two-stage prediction model, are developed to forecast financial distress, and the results acquired from the different models are compared with each other and with the collected data. The findings show that the two-stage model incorporating financial ratios, corporate governance and market factors has the lowest misclassification error rate. The two-stage model is more accurate than the one-stage model as its distressed cut-off indicators are adjusted according to the macroeconomic-based credit cycle index.
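
The first stage can be illustrated with a multinomial logit over the three distress states fitted in statsmodels; the firm-level predictors below are synthetic stand-ins for the study's financial-ratio, governance and market variables.

```python
# Sketch of the first stage described above: a multinomial logit over the three
# distress states ("non-distressed", "slightly-distressed", "reorganization and
# bankruptcy") fitted with statsmodels. The firm-level predictors are synthetic,
# not the study's financial-ratio, governance and market variables.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
leverage = rng.normal(0.5, 0.2, n)              # stand-in financial ratio
roa = rng.normal(0.05, 0.08, n)                 # stand-in profitability
board_indep = rng.uniform(0.2, 0.9, n)          # stand-in governance factor

# latent score generating three states (0 = healthy, 1 = slight, 2 = bankrupt)
score = 3.0 * leverage - 8.0 * roa - 1.0 * board_indep + rng.normal(0, 0.8, n)
state = np.digitize(score, [1.2, 2.0])

X = sm.add_constant(np.column_stack([leverage, roa, board_indep]))
model = sm.MNLogit(state, X).fit(disp=False)
probs = model.predict(X)                        # n x 3 matrix of state probabilities
print("average predicted probabilities:", probs.mean(axis=0).round(3))
```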

Keywords: Multinomial logit model, corporate governance, company failure, reorganization, bankruptcy

Procedia PDF Downloads 377
1673 Understanding the Complexities of Consumer Financial Spinning

Authors: Olivier Mesly

Abstract:

This research presents a conceptual framework termed 'Consumer Financial Spinning' (CFS) to analyze consumer behavior in the financial/economic markets. This phenomenon occurs when consumers of high-stakes financial products accumulate unsustainable debt, leading them to detach from their initial financial hierarchy of needs, wealth-related goals, and preferences regarding their household portfolio of assets. The daring actions of these consumers, forming a dark financial triangle, are characterized by three behaviors: overconfidence, the use of rationed rationality, and deceitfulness. We show that CFS can be incorporated into the traditional CAPM and Markowitz portfolio optimization models to create a framework that explains such market phenomena as the global financial crisis, highlighting the antecedents and consequences of ill-conceived speculation. Because this is a conceptual paper, there is no methodology with respect to ground studies. However, we apply modeling principles derived from the data percolation methodology, which contains tenets explicating how to structure concepts. A simulation test of the proposed framework is conducted; it demonstrates the conditions under which the relationship between expected returns and risk may deviate from linearity. The analysis and conceptual findings are particularly relevant both theoretically and pragmatically, as they shed light on the psychological conditions that drive intense speculation, which can lead to market turmoil. Armed with such understanding, regulators are better equipped to propose solutions before economic problems get out of control.

Keywords: consumer financial spinning, rationality, deceitfulness, overconfidence, CAPM

Procedia PDF Downloads 48
1672 A Multi-Objective Programming Model to Supplier Selection and Order Allocation Problem in Stochastic Environment

Authors: Rouhallah Bagheri, Morteza Mahmoudi, Hadi Moheb-Alizadeh

Abstract:

This paper aims at developing a multi-objective model for the supplier selection and order allocation problem in a stochastic environment, where the purchasing cost, the percentage of items delivered with delay and the percentage of rejected items provided by each supplier are supposed to be stochastic parameters following an arbitrary probability distribution. In this regard, dependent chance programming is used, which maximizes the probability that the total purchasing cost, the total number of items delivered with delay and the total number of rejected items are less than or equal to pre-determined values given by the decision maker. The above stochastic multi-objective programming problem is then transformed into a stochastic single-objective programming problem using the minimum deviation method. In the next step, the resulting problem is solved by applying a genetic algorithm, which performs a simulation process in order to calculate the stochastic objective function as its fitness function. Finally, the impact of the stochastic parameters on the obtained solution is examined via a sensitivity analysis exploiting the coefficient of variation. The results show that the greater the coefficients of variation of the stochastic parameters, the more the value of the objective function in the stochastic single-objective programming problem deteriorates.
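
A minimal sketch of this solution approach is shown below: a genetic algorithm whose fitness is the Monte-Carlo estimated probability that total cost, delayed items and rejected items all stay below pre-set limits, echoing the dependent-chance idea. The supplier data, limits and GA settings are invented placeholders.

```python
# Minimal sketch of the solution approach described above: a genetic algorithm whose
# fitness is estimated by Monte-Carlo simulation, approximating the probability that
# total cost, delayed items and rejected items stay below pre-set limits
# (the dependent-chance idea). Supplier data, limits and GA settings are invented.
import numpy as np

rng = np.random.default_rng(0)
n_sup, demand = 4, 1000.0
unit_cost_mu = np.array([5.0, 5.5, 4.8, 6.0])        # stochastic cost means
delay_rate_mu = np.array([0.04, 0.02, 0.06, 0.01])   # stochastic delay-rate means
reject_rate_mu = np.array([0.03, 0.02, 0.05, 0.01])  # stochastic rejection-rate means
limits = (5600.0, 40.0, 30.0)                        # cost / delayed / rejected caps

def fitness(weights, n_sim=200):
    """Probability (over simulations) that all three limits are met simultaneously."""
    alloc = demand * weights / weights.sum()
    cost = (rng.normal(unit_cost_mu, 0.3, (n_sim, n_sup)) * alloc).sum(axis=1)
    delayed = (rng.normal(delay_rate_mu, 0.01, (n_sim, n_sup)).clip(0) * alloc).sum(axis=1)
    rejected = (rng.normal(reject_rate_mu, 0.01, (n_sim, n_sup)).clip(0) * alloc).sum(axis=1)
    ok = (cost <= limits[0]) & (delayed <= limits[1]) & (rejected <= limits[2])
    return ok.mean()

pop = rng.random((30, n_sup)) + 0.01
for gen in range(40):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]               # elitist selection
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(10, size=2)]
        child = np.where(rng.random(n_sup) < 0.5, a, b)         # uniform crossover
        child = np.clip(child + rng.normal(0, 0.05, n_sup), 0.01, None)  # mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best allocation shares:", np.round(best / best.sum(), 3))
```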

Keywords: supplier selection, order allocation, dependent chance programming, genetic algorithm

Procedia PDF Downloads 313
1671 Encoded Fiber Optic Sensors for Simultaneous Multipoint Sensing

Authors: C. Babu Rao, Pandian Chelliah

Abstract:

Owing to their reliability, a number of fluorescence-spectrum-based fiber optic sensors have been developed for the detection and identification of hazardous chemicals such as explosives, narcotics, etc. In high-security areas, such as airports, it is important to monitor multiple locations simultaneously. This calls for the deployment of a portable sensor at each location. However, the selectivity and sensitivity of these techniques depend on the spectral resolution of the spectral analyzer: the better the resolution, the larger the repertoire of chemicals that can be detected. A portable unit will have limitations in meeting these requirements. Optical fibers can be employed for collecting and transmitting the spectral signal from the portable sensor head to a sensitive central spectral analyzer (CSA). For multipoint sensing, optical multiplexing of multiple sensor heads with the CSA has to be adopted. However, with multiplexing, when one sensor head is connected to the CSA, the rest may remain unconnected for the turn-around period; the larger the number of sensor heads, the larger this turn-around time will be. To circumvent this limitation, we propose in this paper an optical encoding methodology that allows multiple portable sensor heads to be connected to a single CSA. Each portable sensor head is assigned a unique address. The spectra of every chemical detected through a sensor head are encoded with its unique address and can be identified at the CSA end. The proposed methodology is demonstrated through a simulation using MATLAB Simulink.

Keywords: optical encoding, fluorescence, multipoint sensing

Procedia PDF Downloads 710
1670 Neighbourhood Walkability and Quality of Life: The Mediating Role of Place Adherence and Social Interaction

Authors: Michał Jaśkiewicz

Abstract:

The relation between walkability, place adherence, social relations and quality of life was explored in a Polish context. A considerable number of studies have suggested that environmental factors may influence the quality of life through indirect pathways. The list of possible psychological mediators includes social relations and identity-related variables. Based on the results of Study 1, local identity is a significant mediator in the relationship between neighbourhood walkability and quality of life. It was assumed that pedestrian-oriented neighbourhoods enable residents to interact and that these spontaneous interactions can help to strengthen a sense of local identity, thus influencing the quality of life. We, therefore, conducted further studies, testing the relationship experimentally in studies 2a and 2b. Participants were exposed to (2a) photos of walkable/non-walkable neighbourhoods or (2b) descriptions of high/low-walkable neighbourhoods. They were then asked to assess the walkability of the neighbourhoods and to evaluate their potential social relations and quality of life in these places. In both studies, social relations with neighbours turned out to be a significant mediator between walkability and quality of life. In Study 3, we implemented the measure of overlapping individual and communal identity (fusion with the neighbourhood) and willingness to collective action as mediators. Living in a walkable neighbourhood was associated with identity fusion with that neighbourhood. Participants who felt more fused expressed greater willingness to engage in collective action with other neighbours. Finally, this willingness was positively related to the quality of life in the city. In Study 4, we used commuting time (an aspect of walkability related to the time that people spend travelling to work) as the independent variable. The results showed that a shorter average daily commuting time was linked to more frequent social interactions in the neighbourhood. Individuals who assessed their social interactions as more frequent expressed a stronger city identification, which was in turn related to quality of life. To sum up, our research replicated and extended previous findings on the association between walkability and well-being measures. We introduced potential mediators of this relationship: social interactions in the neighbourhood and identity-related variables.
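
The mediation logic used across these studies (walkability -> social interaction -> quality of life) can be sketched with the product-of-coefficients approach and a bootstrap of the indirect effect; the data below are synthetic, not the survey samples.

```python
# Sketch of the mediation analysis used in the studies above (walkability ->
# social interaction -> quality of life), using the product-of-coefficients
# approach with a bootstrap of the indirect effect. The data are synthetic,
# not the Polish survey samples.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
walkability = rng.normal(size=n)
social = 0.5 * walkability + rng.normal(size=n)              # mediator
qol = 0.4 * social + 0.1 * walkability + rng.normal(size=n)  # outcome

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                          # X -> M
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]    # M -> Y | X
    return a * b

boot = [indirect_effect(*[v[idx] for v in (walkability, social, qol)])
        for idx in (rng.integers(0, n, n) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(walkability, social, qol):.3f}, "
      f"95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```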

Keywords: walkability, quality of life, social relations, analysis of mediation

Procedia PDF Downloads 327
1669 3D Steady and Transient Centrifugal Pump Flow within Ansys CFX and OpenFOAM

Authors: Clement Leroy, Guillaume Boitel

Abstract:

This paper presents a comparative benchmarking review of steady and transient three-dimensional (3D) flow computations in a centrifugal pump using commercial (Ansys CFX) and open source (OpenFOAM) computational fluid dynamics (CFD) software. In a centrifugal rotodynamic pump, the fluid enters the impeller along the rotation axis and is accelerated in order to increase the pressure, flowing radially outward into another stage, a vaned diffuser or a volute casing, from where it finally exits into a downstream pipe. Simulations are carried out at the best efficiency point (BEP) and at part load, for single-phase flow with several turbulence models. The results are compared with the overall performance report from experimental data. The use of CFD technology in industry is still limited by the high computational costs, and even more by the high cost of commercial CFD software and high-performance computing (HPC) licenses. The main objectives of the present study are to define an OpenFOAM methodology for high-quality 3D steady and transient turbomachinery CFD simulation and to conduct a thorough time-accurate performance analysis. In addition, a detailed comparison of the computational methods and features of the latest Ansys release 18 and OpenFOAM is carried out to assess the accuracy and industrial applicability of those solvers. Finally, an automated connected workflow (IoT) for turbine blade applications is presented.

Keywords: benchmarking, CFX, internet of things, openFOAM, time-accurate, turbomachinery

Procedia PDF Downloads 205
1668 The Persistence of Abnormal Return on Assets: An Exploratory Analysis of the Differences between Industries and Differences between Firms by Country and Sector

Authors: José Luis Gallizo, Pilar Gargallo, Ramon Saladrigues, Manuel Salvador

Abstract:

This study offers an exploratory statistical analysis of the persistence of annual profits across a sample of firms from different European Union (EU) countries. To this end, a hierarchical Bayesian dynamic model has been used which enables the annual behaviour of those profits to be broken down into a permanent structural component and a transitory component, while also distinguishing between general effects affecting the industry as a whole to which each firm belongs and specific effects affecting each firm in particular. This breakdown enables the relative importance of those fundamental components to be more accurately evaluated by country and sector. Furthermore, the Bayesian approach allows for testing different hypotheses about the homogeneity of the behaviour of the above components with respect to the sector and the country where the firm develops its activity. The data analysed come from a sample of 23,293 firms in EU countries selected from the AMADEUS database. The period analysed ran from 1999 to 2007, and 21 sectors were analysed, chosen in such a way that there was a sufficiently large number of firms in each country-sector combination for the industry effects to be estimated accurately enough for meaningful comparisons to be made by sector and country. The analysis has been conducted by sector and by country from a Bayesian perspective, thus making the study more flexible and realistic since the estimates obtained do not depend on asymptotic results. In general terms, the study finds that, although the industry effects are significant, the firm-specific effects are more important. That importance varies depending on the sector or the country in which the firm carries out its activity. The influence of firm effects accounts for around 81% of total variation and displays a significantly lower degree of persistence, with adjustment speeds oscillating around 34%. However, this pattern is not homogeneous but depends on the sector and country analysed. Industry effects, which also depend on the sector and country analysed, are of more marginal importance but are significantly more persistent, with adjustment speeds oscillating around 7-8%; this degree of persistence is very similar for most of the sectors and countries analysed.

Keywords: dynamic models, Bayesian inference, MCMC, abnormal returns, persistence of profits, return on assets

Procedia PDF Downloads 401
1667 Swastika Shape Multiband Patch Antenna for Wireless Applications on Low Cost Substrate

Authors: Md. Samsuzzaman, M. T. Islam, J. S. Mandeep, N. Misran

Abstract:

In this article, a compact, simple-structure, modified Swastika-shape multiband patch antenna on a substrate of readily available low-cost polymer resin composite material is designed for Wi-Fi and WiMAX applications. The substrate material consists of an epoxy matrix reinforced by woven glass. The designed microstrip-line-fed compact antenna comprises a planar wide square slot ground with four slits and a Swastika-shape radiating patch with a rectangular slot. The effect of different substrate materials on the reflection coefficients of the proposed antenna was also analyzed. It can be clearly seen that the proposed antenna provides a wider bandwidth and an acceptable return loss value compared to other reported materials. The simulation results exhibit that the antenna has an impedance bandwidth with -10 dB return loss at 3.01-3.89 GHz and 4.88-6.10 GHz, which can cover the WLAN, WiMAX and public-safety WLAN bands. The proposed Swastika-shape antenna was designed and analyzed using the finite-element-method-based simulator HFSS and was realized on a low-cost FR4 (polymer resin composite material) printed circuit board. The electrical performance and superior frequency characteristics make the proposed antenna desirable for wireless communications.

Keywords: epoxy resin polymer, multiband, swastika shaped, wide slot, WLAN/WiMAX

Procedia PDF Downloads 452
1666 NOx Prediction by Quasi-Dimensional Combustion Model of Hydrogen Enriched Compressed Natural Gas Engine

Authors: Anas Rao, Hao Duan, Fanhua Ma

Abstract:

The dependency on fossil fuels can be minimized by using hydrogen enriched compressed natural gas (HCNG) in transportation vehicles. However, the NOx emissions of HCNG engines are significantly higher, and this has turned out to be their major drawback. Therefore, the study of the NOx emissions of HCNG engines is a very important area of research. In this context, experiments have been performed at different hydrogen percentages, ignition timings, air-fuel ratios, manifold absolute pressures, loads and engine speeds. Afterwards, the simulation has been accomplished with the quasi-dimensional combustion model of the HCNG engine. In order to investigate the NOx emission, the NO mechanism has been coupled to the quasi-dimensional combustion model of the HCNG engine. Three NOx mechanisms (thermal NOx, prompt NOx and the N2O mechanism) have been used to predict the NOx emission. For validation purposes, the NO curve has been transformed into NO packets based on a temperature difference of 100 K for the lean-burn and 60 K for the stoichiometric condition, while the width of each packet has been taken as the ratio of the crank duration of the packet to the total burn duration. The combustion chamber of the engine has been divided into three zones, with each zone equal to the product of the summation of the NO packets and the space. In order to check the accuracy of the model, the percentage error of the NOx emission has been evaluated, and it lies in the range of ±6% and ±10% for the lean-burn and stoichiometric conditions, respectively. Finally, the percentage contribution of each NO formation mechanism has been evaluated.

Keywords: quasi-dimensional combustion, thermal NO, prompt NO, NO packet

Procedia PDF Downloads 251
1665 Testing the Life Cycle Theory on the Capital Structure Dynamics of Trade-Off and Pecking Order Theories: A Case of Retail, Industrial and Mining Sectors

Authors: Freddy Munzhelele

Abstract:

Setting: empirical research has shown that the life cycle theory has an impact on firms' financing decisions, particularly dividend pay-outs. Accordingly, the life cycle theory posits that as a firm matures, it reaches a level and capacity at which it distributes more cash as dividends. On the other hand, young firms prioritise investment opportunity sets and their financing; thus, they pay little or no dividends. The research on firms' financing decisions has also demonstrated, among other things, the adoption of the trade-off and pecking order theories in explaining the dynamics of firms' capital structure. The trade-off theory concerns firms holding a favourable position regarding debt structures, particularly as to the costs and benefits thereof, while the pecking order theory is concerned with firms preferring a hierarchical order when choosing financing sources. The life cycle hypothesis explaining financial managers' decisions regarding the firms' capital structure dynamics appears to be an interesting link, yet this link has been neglected in corporate finance research. If this link is explored as empirical research, the financial decision-making alternatives will be enhanced immensely, since no conclusive evidence has yet been found as to the dynamics of capital structure. Aim: the aim of this study is to examine the impact of the life cycle theory on the capital structure dynamics of the trade-off and pecking order theories for firms listed in the retail, industrial and mining sectors of the JSE. These sectors are among the key contributors to the GDP of the South African economy. Design and methodology: following the postpositivist research paradigm, the study is quantitative in nature and utilises secondary data obtainable from the financial statements of the sampled firms for the period 2010 – 2022. The firms' financial statements will be extracted from the IRESS database. Since the data will be in panel form, a combination of static and dynamic panel data estimators will be used to analyse the data. The overall data analyses will be done using the STATA program. Value add: this study directly investigates the link between the life cycle theory and the dynamics of capital structure decisions, particularly the trade-off and pecking order theories.

Keywords: life cycle theory, trade-off theory, pecking order theory, capital structure, JSE listed firms

Procedia PDF Downloads 61