Search results for: travel cost techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12344

1184 Enhancement of Hardness Related Properties of Grey Cast Iron Powder Reinforced AA7075 Metal Matrix Composites Through T6 and T8 Heat Treatments

Authors: S. S. Sharma, P. R. Prabhu, K. Jagannath, Achutha Kini U., Gowri Shankar M. C.

Abstract:

In the present global scenario, aluminum alloys are attracting the attention of many innovators as competing structural materials for automotive and space applications. Compared to other competing alloys, 7xxx series aluminum alloys in particular have been studied extensively because of their benefits such as moderate strength, good deforming characteristics, excellent resistance to chemical degradation, and affordable cost. 7075 Al-alloys have been used in the transportation industry for the fabrication of several types of automobile parts, such as wheel covers, panels and structures. It is expected that substitution of such aluminum alloys for steels will result in great improvements in energy economy, durability and recyclability. However, it is necessary to improve the strength and formability of aluminum alloys at low temperatures for still better applications. Aluminum–zinc–magnesium alloys, with or without other alloying additions, denoted as the 7xxx series, are medium-strength heat-treatable alloys. Cu, Mn and Si are the other solute elements which contribute to the improvement in mechanical properties achievable by selecting and tailoring a suitable heat treatment process. On subjecting the alloy to suitable treatments such as age hardening or cold-deformation-assisted heat treatments, known as low temperature thermomechanical treatments (LTMT), the desired properties can be incorporated. T6 is the age hardening or precipitation hardening process with an artificial aging cycle, whereas T8 comprises an LTMT treatment aged artificially after X% cold deformation. When cold deformation is applied after solution treatment, there is an increase in hardness-related properties such as wear resistance, yield and ultimate strength, and toughness, at the expense of ductility. During precipitation hardening, both the hardness and the strength of the samples increase. A decreasing peak hardness value with increasing aging temperature is the well-known behavior of age-hardenable alloys. The peak hardness value increases further when room-temperature deformation is combined with age hardening, a combination known as thermomechanical treatment. Considering these aspects, this work performs the heat treatments and evaluates the hardness, tensile strength, wear resistance and distribution pattern of the reinforcement in the matrix. Increases in hardness of 2 to 2.5 times and 3 to 3.5 times are reported for the age hardening and LTMT treatments respectively, as compared to the as-cast composite. A better distribution of reinforcements in the matrix, a nearly twofold increase in strength levels and up to a fivefold increase in wear resistance are also observed in the present study.

Keywords: reinforcement, precipitation, thermomechanical, dislocation, strain hardening

Procedia PDF Downloads 296
1183 Radar Cross Section Modelling of Lossy Dielectrics

Authors: Ciara Pienaar, J. W. Odendaal, J. Joubert, J. C. Smit

Abstract:

The radar cross section (RCS) of dielectric objects plays an important role in many applications, such as low-observability technology development, drone detection and monitoring, as well as coastal surveillance. Various materials are used to construct the targets of interest, such as metal, wood, composite materials, radar absorbent materials, and other dielectrics. Since simulated datasets are increasingly being used to supplement in-field measurements, as they are more cost effective and a larger variety of targets can be simulated, it is important to have a high level of confidence in the predicted results. Confidence can be attained through validation. Various computational electromagnetic (CEM) methods are capable of predicting the RCS of dielectric targets. This study will extend previous studies by validating full-wave and asymptotic RCS simulations of dielectric targets with measured data. The paper will provide measured RCS data for a number of canonical dielectric targets exhibiting different material properties. As stated previously, these measurements are used to validate numerous CEM methods. The dielectric properties are accurately characterized to reduce the uncertainties in the simulations. Finally, an analysis of the sensitivity of oblique and normal incidence scattering predictions to material characteristics is also presented. In this paper, the ability of several CEM methods, including the method of moments (MoM) and physical optics (PO), to calculate the RCS of dielectrics was validated with measured data. A few dielectrics, exhibiting different material properties, were selected and several canonical targets, such as flat plates and cylinders, were manufactured. The RCS of these dielectric targets was measured in a compact range at the University of Pretoria, South Africa, over a frequency range of 2 to 18 GHz and a 360° azimuth angle sweep. This study also investigated the effect of slight variations in the material properties on the calculated RCS results, by varying the material properties within a realistic tolerance range and comparing the calculated RCS results. Interesting measured and simulated results have been obtained. Large discrepancies were observed between the different methods as well as the measured data. It was also observed that the accuracy of the RCS data of the dielectrics can be frequency and angle dependent. The simulated RCS for some of these materials also exhibits high sensitivity to variations in the material properties. Comparison graphs between the measured and simulated RCS datasets will be presented and the validation thereof will be discussed. Finally, the effect that small tolerances in the material properties have on the calculated RCS results will be shown, and the importance of accurate dielectric material properties for validation purposes will be discussed.

Keywords: asymptotic, CEM, dielectric scattering, full-wave, measurements, radar cross section, validation

Procedia PDF Downloads 223
1182 Production of Recombinant Human Serum Albumin in Escherichia coli: A Crucial Biomolecule for Biotechnological and Healthcare Applications

Authors: Ashima Sharma, Tapan K. Chaudhuri

Abstract:

Human Serum Albumin (HSA) is one of the most in-demand therapeutic proteins, with immense biotechnological applications. The current source of HSA is human blood plasma. Blood is a limited and unsafe source, as it poses the risk of contamination by various blood-derived pathogens. This issue has led to the exploitation of various hosts with the aim of obtaining an alternative source for the production of recombinant HSA (rHSA). However, to date no host has proven to be commercially effective for rHSA production because of their respective limitations. Thus, there exists an indispensable need to promote non-animal-derived rHSA production. Of all the host systems, Escherichia coli is one of the most convenient hosts, having contributed to the production of more than 30% of the FDA-approved recombinant pharmaceuticals. E. coli grows rapidly and its culture reaches high cell density using inexpensive and simple substrates. The fermentation batch turnaround number for E. coli culture is 300 per year, which is far greater than that of any other available host system. Therefore, E. coli-derived recombinant products have greater economic potential, as fermentation processes are cheaper compared to the other available expression hosts. Despite all the mentioned advantages, E. coli has not been successfully adopted as a host for rHSA production. The major bottleneck in exploiting E. coli as a host for rHSA production was aggregation, i.e., the majority of the expressed recombinant protein formed inclusion bodies (more than 90% of the total expressed rHSA) in the E. coli cytosol. Recovery of functional rHSA from inclusion bodies is not preferred because it is tedious, time-consuming, laborious and expensive. Because of this limitation, the E. coli host system has been neglected for rHSA production for the last few decades. Considering the advantages of E. coli as a host, the present work has targeted E. coli as an alternative host for rHSA production by resolving the major issue of inclusion body formation associated with it. In the present study, we have developed a novel and innovative method for enhanced soluble and functional production of rHSA in E. coli (~60% of the total expressed rHSA in the soluble fraction) through modulation of the cellular growth, folding and environmental parameters, thereby leading to significantly improved expression levels as well as an enhanced functional and soluble proportion of the total expressed rHSA in the cytosolic fraction of the host. Therefore, in the present case we have filled a gap in the literature by exploiting the most well-studied host system, Escherichia coli, which is low cost, fast growing, scalable and ‘yet neglected’, for the enhancement of the functional production of HSA, one of the most crucial biomolecules for clinical and biotechnological applications.

Keywords: enhanced functional production of rHSA in E. coli, recombinant human serum albumin, recombinant protein expression, recombinant protein processing

Procedia PDF Downloads 331
1181 CO2 Capture in Porous Silica Assisted by Lithium

Authors: Lucero Gonzalez, Salvador Alfaro

Abstract:

Carbon dioxide (CO2) and methane (CH4) are considered the most abundant compounds among the greenhouse gases (CO2, NOx, SOx, CxHx, etc.); due to their higher concentration, these two gases have a greater impact on environmental pollution and contribute to global warming. Their recovery, disposal and subsequent reuse are therefore of great interest, especially from the ecological and health perspective. On the one hand, porous inorganic materials are good candidates for capturing gases, because these types of materials exhibit higher thermal, chemical and mechanical stability under gas adsorption processes. On the other hand, during the design and synthetic preparation of porous materials it is possible to add other intrinsic properties (physicochemical and structural) by adding chemical compounds as dopants or by using structure-directing agents or surfactants to improve the porous structure; these features provide alternative materials for the separation, capture and storage of greenhouse gases. In this work, silica-based ordered mesoporous materials were prepared using Surfynol as the surfactant. Surfactant micelles are commonly used as self-assembly templates for the development of new porous silica structures, providing a variety of textures and structures. Surfynol is a commercial non-ionic surfactant, so it was necessary to determine its critical micelle concentration (CMC) by the pyrene I1/I3 ratio method before preparing the silica particles. Once the CMC was known, a precursor gel was prepared via the sol-gel process at room temperature using TEOS as the silica precursor, NH4OH as the catalyst, Surfynol as the template and H2O as the solvent. The precursor gel was then treated hydrothermally in a 100 mL Teflon-lined stainless steel autoclave and kept at 100 ºC for 24 h under static conditions in a convection oven. After that, the porous silica particles obtained were impregnated with lithium to improve the CO2 adsorption capacity. The silica particles were then characterized physicochemically, morphologically and structurally by XRD, FTIR, BET and SEM techniques. The thermal stability and the CO2 adsorption capacity were evaluated by thermogravimetric analysis (TGA). According to the results, Surfynol is a good candidate for preparing silica particles with an ordered structure. TGA analysis also showed that the particles have good thermal stability in the range of 250 °C to 800 °C. The best materials had the capacity to adsorb between 70 and 90 mg of CO2 per gram of silica particles, and their CO2 adsorption capacity depends on the thermal pretreatment of the porous silica before the adsorption experiments and on the concentration of surfactant used during the synthesis of the silica particles. Acknowledgments: This work was supported by SIP-IPN through project SIP-20161862.

Keywords: CO2 adsorption, lithium as dopant, porous silica, surfynol as surfactant, thermogravimetric analysis

Procedia PDF Downloads 248
1180 Epoxomicin Affects Proliferating Neural Progenitor Cells of Rat

Authors: Bahaa Eldin A. Fouda, Khaled N. Yossef, Mohamed Elhosseny, Ahmed Lotfy, Mohamed Salama, Mohamed Sobh

Abstract:

Developmental neurotoxicity (DNT) entails the toxic effects imparted by various chemicals on the brain during the early childhood period. As human brains are vulnerable during this period, various chemicals have their maximum effects on the brain during early childhood. Some toxicants, e.g. lead, have been confirmed to induce developmental toxic effects on the CNS; however, most agents cannot be identified with certainty, owing to the limitations of the predictive toxicology models used. A novel alternative method that can overcome most of the limitations of conventional techniques is the use of the 3D neurosphere system. This in-vitro system can recapitulate most of the changes occurring during the period of brain development, making it an ideal model for predicting neurotoxic effects. In the present study, we verified the possible DNT of epoxomicin, which is a naturally occurring selective proteasome inhibitor with anti-inflammatory activity. Rat neural progenitor cells were isolated from rat embryos (E14) extracted from placental tissue. The cortices were aseptically dissected out from the brains of the fetuses and the tissues were triturated by repeated passage through a fire-polished constricted Pasteur pipette. The dispersed tissues were allowed to settle for 3 min. The supernatant was then transferred to a fresh tube and centrifuged at 1,000 g for 5 min. The pellet was placed in Hank’s balanced salt solution and cultured as free-floating neurospheres in proliferation medium. Two doses of epoxomicin (1 µM and 10 µM) were applied to cultured neurospheres for a period of 14 days. For proliferation analysis, spheres were cultured in proliferation medium. After 0, 4, 5, 11, and 14 days, sphere size was determined by software analysis. The diameter of each neurosphere was measured and exported to an Excel file for further statistical analysis. For viability analysis, trypsin-EDTA solution was added to the neurospheres for 3 min to dissociate them into a single-cell suspension, and viability was then evaluated by the Trypan Blue exclusion test. Epoxomicin was found to affect the proliferation and viability of neurospheres, and these effects were positively correlated with dose and time. This study confirms the DNT effects of epoxomicin on the 3D neurosphere model. The effects on proliferation suggest possible gross morphologic changes, while the decrease in viability suggests possible focal lesions on exposure to epoxomicin during early childhood.

Keywords: neural progenitor cells, epoxomicin, neurosphere, medical and health sciences

Procedia PDF Downloads 405
1179 Study of Chemical and Physical - Mechanical Properties Lime Mortar with Addition of Natural Resins

Authors: I. Poot-Ocejo, H. Silva-Poot, J. C. Cruz, A. Yeladaqui-Tello

Abstract:

Mexico has remarkable archaeological remains, mainly in the Maya area, which are critical to the preservation of our cultural heritage, so the authorities have an interest in preserving and restoring these vestiges in the most original way possible, by employing traditional techniques, which offer advantages such as compatibility, durability, strength, uniformity and chemical composition. Recent studies have confirmed the benefit of adding natural resins extracted from the bark of trees, of which Brosium alicastrum (Ramon) has been the most evaluated, besides being one of the most abundant species in the vicinity of the archaeological sites, together with Manilkara zapota (Chicozapote). Therefore, the objective is to determine whether these resins are capable of being employed in archaeological restoration. This study presents the results on the chemical composition and physical-mechanical behavior of eight mortar mixtures made with commercial lime and lime slaked by hand, calcareous sand, and added resins of Brosium alicastrum (Ramon) and Manilkara zapota (Chicozapote). The properties and chemical composition of the resins were determined and quantified by X-Ray Fluorescence (XRF); the pH of the material was determined, indicating that both resins are acidic (3.78 and 4.02), and the maximum addition rate of the resins in water was obtained by means of ultrasonic bath pulses, being 10% in the case of Manilkara zapota, because it contains up to 40% rubber, and 40% for Brosium alicastrum, which contains less rubber. Through a quantitative methodology, the compressive strength of 96 binding mortar specimens of 5 cm x 5 cm x 5 cm was evaluated: 72 with partial substitution of the mixing water by natural resins, in proportions of 5 to 10% in the case of Manilkara zapota and of 20 and 40% for Brosium alicastrum, 12 with artificial resin and 12 without additive (control mortars). Likewise, 24 specimens of brick bonded with mortar were prepared for shear adhesion testing, and the microstructure of the most favorable additions was then determined by SEM analysis. The test results indicate that the addition of Manilkara zapota resin in a proportion of 10% gives a 1.5% increase in compressive strength and a 1% increase in adhesion, compared to the control mortar without addition; in the case of Brosium alicastrum, the results show that the gains in compressive strength and adhesion were insignificant compared to those registered by the Manilkara zapota mixtures. Mortars containing the natural resins show improvements in physical properties and increases in mechanical strength and adhesion compared to those that do not, and in addition the components are chemically compatible; it is therefore considered that they can be employed in archaeological restoration.

Keywords: lime, mortar, natural resins, Manilkara zapota mixtures, Brosium alicastrum

Procedia PDF Downloads 354
1178 Effect of Starch and Plasticizer Types and Fiber Content on Properties of Polylactic Acid/Thermoplastic Starch Blend

Authors: Rangrong Yoksan, Amporn Sane, Nattaporn Khanoonkon, Chanakorn Yokesahachart, Narumol Noivoil, Khanh Minh Dang

Abstract:

Polylactic acid (PLA) is the most commercially available bio-based and biodegradable plastic at present. PLA has been used in plastic-related industries including single-use containers and disposable and environmentally friendly packaging owing to its renewability, compostability, biodegradability, and safety. Although PLA demonstrates reasonably good optical, physical, mechanical, and barrier properties comparable to the existing petroleum-based plastics, its brittleness and mold shrinkage as well as its price are points of concern for the production of rigid and semi-rigid packaging. Blending PLA with other bio-based polymers, including thermoplastic starch (TPS), is an alternative not only to achieve a completely bio-based plastic, but also to reduce the brittleness, the shrinkage during molding and the production cost of PLA-based products. TPS is a material produced mainly from starch, which is cheap, renewable, biodegradable, compostable, and non-toxic. It is commonly prepared by plasticization of starch under applied heat and shear force. Although glycerol has been reported as one of the plasticizers most commonly used for preparing TPS, its migration causes surface stickiness of the TPS products. In some cases, mixed plasticizers or natural fibers have been applied to impede the retrogradation of starch or reduce the migration of glycerol. The introduction of fibers into TPS-based materials could reinforce the polymer matrix as well. Therefore, the objective of the present research is to study the effect of starch type (i.e. native starch and phosphate starch), plasticizer type (i.e. glycerol and xylitol with a weight ratio of glycerol to xylitol of 100:0, 75:25, 50:50, 25:75, and 0:100), and fiber content (i.e. in the range of 1-25 wt%) on the properties of PLA/TPS blends and composites. PLA/TPS blends and composites were prepared using a twin-screw extruder and then converted into dumbbell-shaped specimens using an injection molding machine. The PLA/TPS blends prepared using phosphate starch showed higher tensile strength and stiffness than the blends prepared using the native one. In contrast, the blends from native starch exhibited higher extensibility and heat distortion temperature (HDT) than those from the modified starch. Increasing the xylitol content resulted in enhanced tensile strength, stiffness, and water resistance, but decreased extensibility and HDT of the PLA/TPS blend. Tensile properties and hydrophobicity of the blend could be improved by incorporating silane-treated jute fibers.

Keywords: polylactic acid, thermoplastic starch, Jute fiber, composite, blend

Procedia PDF Downloads 408
1177 Assessment of Hydrologic Response of a Naturalized Tropical Coastal Mangrove Ecosystem Due to Land Cover Change in an Urban Watershed

Authors: Bryan Clark B. Hernandez, Eugene C. Herrera, Kazuo Nadaoka

Abstract:

Mangrove forests thriving in intertidal zones in tropical and subtropical regions of the world offer a range of ecosystem services, including carbon storage and sequestration. They can help regulate the detrimental effects of climate change, with carbon sequestration two to four times greater than that of mature tropical rainforests. Moreover, they are effective natural defenses against storm surges and tsunamis. However, their proliferation depends significantly on the prevailing hydroperiod at the coast. In the Philippines, these coastal ecosystems have been severely threatened, with a 50% decline in areal extent observed from 1918 to 2010. The highest decline occurred in 1950-1972, when national policies encouraged the development of fisheries and aquaculture. With the intensive land use conversion upstream, changes in the freshwater-saltwater envelope at the coast may considerably impact mangrove growth conditions. This study investigates a developing urban watershed in Kalibo, Aklan province, with a 220-hectare mangrove forest replanted over 30 years from coastal mudflats. Since then, the mangrove forest has been sustainably conserved and declared a protected area. A hybrid land cover classification technique was used to classify Landsat images for the years 1990, 2010, and 2017. The digital elevation model utilized was Interferometric Synthetic Aperture Radar (IFSAR) data with a 5-meter resolution, used to delineate the watersheds. Using numerical modelling techniques, the hydrologic and hydraulic influence of land cover change on flow and sediment dynamics was simulated. While significant land cover change occurred upland, thereby increasing runoff and sediment loads, the abundance of mangrove forests adjacent to the coast of the urban watershed was nevertheless sustained. However, significant alteration of the coastline was observed in Kalibo through the years, probably due to the massive land-use conversion upstream and the significant replanting of mangroves downstream. Understanding the hydrologic-hydraulic response of these watersheds to land cover change is essential to help local governments and stakeholders facilitate better management of these mangrove ecosystems.

Keywords: coastal mangroves, hydrologic model, land cover change, Philippines

Procedia PDF Downloads 108
1176 Achieving Net Zero Energy Building in a Hot Climate Using Integrated Photovoltaic and Parabolic Trough Collectors

Authors: Adel A. Ghoneim

Abstract:

In most existing buildings in hot climates, cooling loads lead to high primary energy consumption and, consequently, high CO2 emissions. These can be substantially decreased with integrated renewable energy systems. Kuwait is characterized by its long, dry, hot summer and short, warm winter. Kuwait receives an annual total radiation of more than 5280 MJ/m2 with approximately 3347 h of sunshine. Solar energy systems consisting of PV modules and parabolic trough collectors are considered to satisfy the electricity consumption, domestic water heating, and cooling loads of an existing building. This paper presents the results of an extensive program of energy conservation and energy generation using integrated photovoltaic (PV) modules and parabolic trough collectors (PTC). The program was conducted on an existing institutional building with the intention of converting it into a Net-Zero Energy Building (NZEB) or near Net-Zero Energy Building (nNZEB). The program consists of two phases; the first phase is concerned with energy auditing and energy conservation measures at minimum cost, and the second phase considers the installation of photovoltaic modules and parabolic trough collectors. The 2-storey building under consideration is the Applied Sciences Department at the College of Technological Studies, Kuwait. Single-effect lithium bromide-water absorption chillers are implemented to provide the air conditioning load of the building. A numerical model is developed to evaluate the performance of parabolic trough collectors in the Kuwait climate. The transient simulation program TRNSYS is adapted to simulate the performance of the different solar system components. In addition, a numerical model is developed to assess the environmental impacts of building-integrated renewable energy systems. Results indicate that efficient energy conservation can play an important role in converting existing buildings into NZEBs, as it saves a significant portion of the annual energy consumption of the building. The first phase results in an energy conservation of about 28% of the building consumption. In the second phase, the integrated PV completely covers the lighting and equipment loads of the building. On the other hand, parabolic trough collectors with an optimum area of 765 m2 can satisfy a significant portion of the cooling load, i.e., about 73% of the total building cooling load. The annual avoided CO2 emission is evaluated at the optimum conditions to assess the environmental impacts of the renewable energy systems. The total annual avoided CO2 emission is about 680 metric tons/year, which confirms the environmental benefits of these systems in Kuwait.
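As a rough illustration of how an avoided-emission figure of this kind can be estimated, the sketch below multiplies the displaced grid electricity by a grid emission factor; the generation values and the emission factor are illustrative assumptions, not numbers taken from the study.

```python
# Minimal sketch: estimating annual avoided CO2 emissions from on-site renewable generation.
# All numeric inputs below are illustrative assumptions, not values reported in the study.

def avoided_co2_tonnes(displaced_grid_kwh: float, emission_factor_kg_per_kwh: float) -> float:
    """Avoided CO2 (metric tonnes/year) = displaced grid electricity x grid emission factor."""
    return displaced_grid_kwh * emission_factor_kg_per_kwh / 1000.0

pv_generation_kwh = 600_000        # assumed annual PV output covering lighting and equipment loads
ptc_cooling_offset_kwh = 500_000   # assumed grid electricity displaced by PTC-driven absorption cooling
emission_factor = 0.62             # assumed kg CO2 per kWh for the local grid mix

total = avoided_co2_tonnes(pv_generation_kwh + ptc_cooling_offset_kwh, emission_factor)
print(f"Estimated avoided CO2: {total:.0f} t/year")
```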

Keywords: building integrated renewable systems, Net-Zero energy building, solar fraction, avoided CO2 emission

Procedia PDF Downloads 588
1175 Refractory Cardiac Arrest: Do We Go beyond, Do We Increase the Organ Donation Pool or Both?

Authors: Ortega Ivan, De La Plaza Edurne

Abstract:

Background: Spain and other European countries have implemented Uncontrolled Donation after Cardiac Death (uDCD) programs. After 15 years of experience in Spain, many things have changed. Recent evidence and technical breakthroughs achieved in resuscitation are relevant for uDCD programs and raise some ethical concerns related to these protocols. Aim: To rethink current uDCD programs in the light of recent evidence on available therapeutic procedures applicable to victims of out-of-hospital cardiac arrest (OHCA). To address the following question: What is the current standard of treatment owed to victims of OHCA before including them in an uDCD protocol? Materials and Methods: Review of the scientific and ethical literature related to both uDCD programs and innovative resuscitation techniques. Results: 1) The standard of treatment received and the chances of survival of victims of OHCA depend on whether they are classified as Non-Heart Beating Patients (NHBP) or Non-Heart-Beating-Donors (NHBD). 2) Recent studies suggest that NHBPs are likely to survive, with good quality of life, if one or more of the following interventions are performed while ongoing CPR -guided by suspected or known cause of OHCA- is maintained: a) direct access to a Cath Lab-H24 or/and to extra-corporeal life support (ECLS); b) transfer in induced hypothermia from the Emergency Medical Service (EMS) to the ICU; c) thrombolysis treatment; d) mobile extra-corporeal membrane oxygenation (mini ECMO) instituted as a bridge to ICU ECLS devices. 3) Victims of OHCA who cannot benefit from any of these therapies should be considered as NHBDs. Conclusion: Current uDCD protocols do not take into account recent improvements in resuscitation and need to be adapted. Operational criteria to distinguish NHBDs from NHBP should seek a balance between the technical imperative (to do whatever is possible), considerations about expected survival with quality of life, and distributive justice (costs/benefits). Uncontrolled DCD protocols can be performed in a way that does not hamper the legitimate interests of patients, potential organ donors, their families, the organ recipients, and the health professionals involved in these processes. Families of NHBDs’ should receive information which conforms to the ethical principles of respect of autonomy and transparency.

Keywords: uncontrolled donation after cardiac death, resuscitation, refractory cardiac arrest, out-of-hospital cardiac arrest, ethics

Procedia PDF Downloads 219
1174 Hybrid Precoder Design Based on Iterative Hard Thresholding Algorithm for Millimeter Wave Multiple-Input-Multiple-Output Systems

Authors: Ameni Mejri, Moufida Hajjaj, Salem Hasnaoui, Ridha Bouallegue

Abstract:

Recent technological advances have made millimeter wave (mmWave) communication possible. Due to the huge amount of spectrum available in mmWave frequency bands, this promising candidate is considered a key technology for the deployment of 5G cellular networks. In order to enhance system capacity and achieve spectral efficiency, very large antenna arrays are employed in mmWave systems to exploit array gain. However, it has been shown that conventional beamforming strategies are not suitable for mmWave hardware implementation. Therefore, new features are required for mmWave cellular applications. Unlike traditional multiple-input-multiple-output (MIMO) systems, for which only digital precoders are essential to accomplish precoding, MIMO technology is different at mmWave because of the limitations of digital precoding. Moreover, digital precoding requires a greater number of radio frequency (RF) chains, along with more signal mixers and analog-to-digital converters. As RF chains increase cost and power consumption, another alternative is needed. Although the hybrid precoding architecture, based on a combination of a baseband precoder and an RF precoder, has been regarded as the best solution, the optimal design of hybrid precoders has not yet been obtained. According to the mapping strategies from RF chains to the different antenna elements, there are two main categories of hybrid precoding architecture. As a hybrid precoding sub-array architecture, the partially-connected structure reduces hardware complexity by using a smaller number of phase shifters, but it sacrifices some beamforming gain. In this paper, we treat the hybrid precoder design in mmWave MIMO systems as a problem of matrix factorization. Thus, we adopt the alternating minimization principle in order to solve the design problem. Further, we present our proposed algorithm for the partially-connected structure, which is based on the iterative hard thresholding method. Through simulation results, we show that our hybrid precoding algorithm provides significant performance gains over existing algorithms. We also show that the proposed approach significantly reduces the computational complexity. Furthermore, valuable design insights are provided when we use the proposed algorithm to make simulation comparisons between the hybrid precoding partially-connected structure and the fully-connected structure.
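The abstract does not spell out the algorithm, but its core building block, iterative hard thresholding, can be illustrated on a generic sparse least-squares problem. The NumPy sketch below shows that principle only; the step size, sparsity level and problem dimensions are assumptions for the example, and it is not the authors' precoder design.

```python
import numpy as np

def iterative_hard_thresholding(A, y, k, n_iter=300):
    """Minimal IHT sketch: approximately minimize ||y - A x||_2 subject to ||x||_0 <= k.
    Each iteration takes a gradient step and keeps only the k largest-magnitude entries."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # conservative step size (1 / spectral norm squared)
    x = np.zeros(A.shape[1], dtype=A.dtype)
    for _ in range(n_iter):
        x = x + step * A.conj().T @ (y - A @ x)     # gradient step on the least-squares objective
        support = np.argsort(np.abs(x))[-k:]        # indices of the k largest entries
        pruned = np.zeros_like(x)
        pruned[support] = x[support]
        x = pruned                                  # hard thresholding
    return x

# Toy example with assumed dimensions: 32 observations, 64 unknowns, 4 nonzero coefficients.
rng = np.random.default_rng(0)
A = rng.standard_normal((32, 64)) / np.sqrt(32)
x_true = np.zeros(64)
x_true[rng.choice(64, 4, replace=False)] = rng.standard_normal(4)
y = A @ x_true
x_hat = iterative_hard_thresholding(A, y, k=4)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```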

Keywords: alternating minimization, hybrid precoding, iterative hard thresholding, low-complexity, millimeter wave communication, partially-connected structure

Procedia PDF Downloads 303
1173 Predicting Polyethylene Processing Properties Based on Reaction Conditions via a Coupled Kinetic, Stochastic and Rheological Modelling Approach

Authors: Kristina Pflug, Markus Busch

Abstract:

Being able to predict polymer properties and processing behavior based on the applied operating reaction conditions is one of the key challenges in modern polymer reaction engineering. Especially for cost-intensive processes with high safety requirements, such as the high-pressure polymerization of low-density polyethylene (LDPE), the need for simulation-based process optimization and product design is high. A multi-scale modelling approach was set up and validated via a series of high-pressure mini-plant autoclave reactor experiments. The approach starts with the numerical modelling of the complex reaction network of the LDPE polymerization, taking into consideration the actual reaction conditions. While this gives average product properties, the complex polymeric microstructure, including random short- and long-chain branching, is calculated via a hybrid Monte Carlo approach. Finally, the processing behavior of LDPE, i.e., its melt flow behavior, is determined as a function of the previously determined polymeric microstructure using the branch-on-branch algorithm for randomly branched polymer systems. All three steps of the multi-scale modelling approach can be independently validated against analytical data. A triple-detector GPC containing an IR, a viscosimetry and a multi-angle light scattering detector is applied. It serves to determine molecular weight distributions as well as chain-length-dependent short- and long-chain branching frequencies. 13C-NMR measurements give average branching frequencies, and rheological measurements in shear and extension serve to characterize the polymeric flow behavior. The agreement between experimental and modelled results was found to be excellent, especially taking into consideration that the applied multi-scale modelling approach does not involve parameter fitting to the data. This validates the suggested approach and proves its universality at the same time. In the next step, the modelling approach can be applied to other reactor types, such as tubular reactors, or to industrial scale. Moreover, sensitivity analysis for systematically varying process conditions is easily feasible. The developed multi-scale modelling approach finally gives the opportunity to predict and design LDPE processing behavior simply based on process conditions such as feed streams and inlet temperatures and pressures.
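The microstructure step described above amounts to Monte Carlo sampling of chain growth events; the toy sketch below illustrates only that idea, with per-event probabilities that are arbitrary assumptions rather than kinetic parameters from the study.

```python
import random

random.seed(1)

def sample_chain(p_propagate=0.999, p_lcb=0.002):
    """Toy Monte Carlo for one chain: at each step the growing radical either propagates
    (and may create a long-chain branch point with probability p_lcb) or terminates."""
    length, branches = 1, 0
    while random.random() < p_propagate:
        length += 1
        if random.random() < p_lcb:
            branches += 1
    return length, branches

chains = [sample_chain() for _ in range(20_000)]
avg_len = sum(n for n, _ in chains) / len(chains)
lcb_per_1000 = 1000 * sum(b for _, b in chains) / sum(n for n, _ in chains)
print(f"number-average chain length: {avg_len:.0f} units, LCB per 1000 monomer units: {lcb_per_1000:.2f}")
```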

Keywords: low-density polyethylene, multi-scale modelling, polymer properties, reaction engineering, rheology

Procedia PDF Downloads 111
1172 A First-Principles Investigation of Magnesium-Hydrogen System: From Bulk to Nano

Authors: Paramita Banerjee, K. R. S. Chandrakumar, G. P. Das

Abstract:

Bulk MgH2 has drawn much attention for the purpose of hydrogen storage because of its high hydrogen storage capacity (~7.7 wt %) as well as low cost and abundant availability. However, its practical usage has been hindered because of its high hydrogen desorption enthalpy (~0.8 eV/H2 molecule), which results in an undesirable desorption temperature of 3000C at 1 bar H2 pressure. To surmount the limitations of bulk MgH2 for the purpose of hydrogen storage, a detailed first-principles density functional theory (DFT) based study on the structure and stability of neutral (Mgm) and positively charged (Mgm+) Mg nanoclusters of different sizes (m = 2, 4, 8 and 12), as well as their interaction with molecular hydrogen (H2), is reported here. It has been found that due to the absence of d-electrons within the Mg atoms, hydrogen remained in molecular form even after its interaction with neutral and charged Mg nanoclusters. Interestingly, the H2 molecules do not enter into the interstitial positions of the nanoclusters. Rather, they remain on the surface by ornamenting these nanoclusters and forming new structures with a gravimetric density higher than 15 wt %. Our observation is that the inclusion of Grimme’s DFT-D3 dispersion correction in this weakly interacting system has a significant effect on binding of the H2 molecules with these nanoclusters. The dispersion corrected interaction energy (IE) values (0.1-0.14 eV/H2 molecule) fall in the right energy window, that is ideal for hydrogen storage. These IE values are further verified by using high-level coupled-cluster calculations with non-iterative triples corrections i.e. CCSD(T), (which has been considered to be a highly accurate quantum chemical method) and thereby confirming the accuracy of our ‘dispersion correction’ incorporated DFT calculations. The significance of the polarization and dispersion energy in binding of the H2 molecules are confirmed by performing energy decomposition analysis (EDA). A total of 16, 24, 32 and 36 H2 molecules can be attached to the neutral and charged nanoclusters of size m = 2, 4, 8 and 12 respectively. Ab-initio molecular dynamics (AIMD) simulation shows that the outermost H2 molecules are desorbed at a rather low temperature viz. 150 K (-1230C) which is expected. However, complete dehydrogenation of these nanoclusters occur at around 1000C. Most importantly, the host nanoclusters remain stable up to ~500 K (2270C). All these results on the adsorption and desorption of molecular hydrogen with neutral and charged Mg nanocluster systems indicate towards the possibility of reducing the dehydrogenation temperature of bulk MgH2 by designing new Mg-based nano materials which will be able to adsorb molecular hydrogen via this weak Mg-H2 interaction, rather than the strong Mg-H bonding. Notwithstanding the fact that in practical applications, these interactions will be further complicated by the effect of substrates as well as interactions with other clusters, the present study has implications on our fundamental understanding to this problem.
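For reference, the interaction energy per H2 molecule quoted above is conventionally computed from total energies as (a standard definition consistent with the abstract, not a formula quoted from it)

\[
\mathrm{IE} = \frac{E(\mathrm{Mg}_m) + n\,E(\mathrm{H}_2) - E\big(\mathrm{Mg}_m(\mathrm{H}_2)_n\big)}{n},
\]

so that a positive IE of 0.1-0.14 eV per H2 molecule corresponds to the weak, physisorption-like binding described in the study.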

Keywords: density functional theory, DFT, hydrogen storage, molecular dynamics, molecular hydrogen adsorption, nanoclusters, physisorption

Procedia PDF Downloads 405
1171 Analysis of Eco-Efficiency and the Determinants of Family Agriculture in Southeast Spain

Authors: Emilio Galdeano-Gómez, Ángeles Godoy-Durán, Juan C. Pérez-Mesa, Laura Piedra-Muñoz

Abstract:

Eco-efficiency is receiving ever-increasing interest as an indicator of sustainability, as it links environmental and economic performances in productive activities. In agriculture, these indicators and their determinants prove relevant due to the close relationships in this activity between the use of natural resources, which is generally limited, and the provision of basic goods to society. In this context, various analyses have focused on eco-efficiency by considering individual family farms as the basic production unit. However, not only must the measure of efficiency be taken into account, but also the existence of a series of factors which constitute socio-economic, political-institutional, and environmental determinants. Said factors have been studied to a lesser extent in the literature. The present work analyzes eco-efficiency at a micro level, focusing on small-scale family farms as the main decision-making units in horticulture in southeast Spain, a sector which represents about 30% of the fresh vegetables produced in the country and about 20% of those consumed in Europe. The objectives of this study are a) to obtain a series of eco-efficiency indicators by estimating several pressure ratios and economic value added in farming, b) to analyze the influence of specific social, economic and environmental variables on the aforementioned eco-efficiency indicators. The present work applies the method of Data Envelopment Analysis (DEA), which calculates different combinations of environmental pressures (water usage, phytosanitary contamination, waste management, etc.) and aggregate economic value. In a second stage, an analysis is conducted on the influence of the socio-economic and environmental characteristics of family farms on the eco-efficiency indicators, as endogeneous variables, through the use of truncated regression and bootstrapping techniques, following Simar-Wilson methodology. The results reveal considerable inefficiency in aspects such as waste management, while there is relatively little inefficiency in water usage and nitrogen balance. On the other hand, characteristics, such as product specialization, the adoption of quality certifications and belonging to a cooperative do have a positive impact on eco-efficiency. These results are deemed to be of interest to agri-food systems structured on small-scale producers, and they may prove useful to policy-makers as regards managing public environmental programs in agriculture.
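The first-stage computation described above can be sketched as a standard input-oriented, constant-returns-to-scale DEA program, solved as one linear program per farm. The snippet below is a minimal illustration with made-up data (environmental pressures as inputs, economic value added as the single output); it is not the authors' dataset or their exact model specification, and it omits the second-stage truncated regression.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_oriented(X, Y, o):
    """Input-oriented CCR efficiency of decision-making unit (farm) o.
    X: (n_farms, n_inputs) environmental pressures; Y: (n_farms, n_outputs) value added.
    Decision variables: [theta, lambda_1, ..., lambda_n]."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                    # minimize theta
    A_inputs = np.c_[-X[o].reshape(m, 1), X.T]     # sum_j lambda_j x_ij - theta x_io <= 0
    A_outputs = np.c_[np.zeros((s, 1)), -Y.T]      # -sum_j lambda_j y_rj <= -y_ro
    A_ub = np.vstack([A_inputs, A_outputs])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    bounds = [(None, None)] + [(0, None)] * n      # theta free, lambdas nonnegative
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Illustrative data: 5 farms, inputs = [water use, nitrogen surplus], output = [value added].
X = np.array([[8.0, 3.0], [6.0, 2.0], [9.0, 5.0], [5.0, 4.0], [7.0, 2.5]])
Y = np.array([[10.0], [9.0], [11.0], [7.0], [10.0]])
for o in range(len(X)):
    print(f"farm {o}: eco-efficiency score = {dea_input_oriented(X, Y, o):.3f}")
```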

Keywords: data envelopment analysis, eco-efficiency, family farms, horticulture, socioeconomic features

Procedia PDF Downloads 173
1170 Interventional Radiology Perception among Medical Students

Authors: Shujon Mohammed Alazzam, Sarah Saad Alamer, Omar Hassan Kasule, Lama Suliman Aleid, Mohammad Abdulaziz Alakeel, Boshra Mosleh Alanazi, Abdullah Abdulelah Altowairqi, Yahya Ali Al-Asiri

Abstract:

Background: Interventional radiology (IR) is a specialized field within radiology that diagnoses and treats several conditions through minimally invasive procedures involving the use of various radiological techniques. In the last few years, the role of IR has expanded to include a variety of organ systems, which has led to an increase in demand for this specialty. The level of knowledge regarding IR is relatively low in general. In this study, we aimed to investigate the perceptions of interventional radiology (IR) as a specialty among medical students and medical interns in Riyadh, Saudi Arabia. Methodology: This was a cross-sectional study. The target population was medical students in Riyadh city, KSA, in January 2023. A questionnaire was used in face-to-face interviews with voluntary participants to assess their knowledge of interventional radiology. Permission was obtained from participants to use their information, assuring them that the data in this study would be used only for scientific purposes. Results: According to the inclusion criteria, a total of 314 students participated in the study. Of the participants, 49% were in the preclinical years and 51% were in the clinical years. The findings indicate that more than half of the students (58%) thought they had good information about IR, while 42% reported that they had poor information and knowledge about IR. Only 28% of students were planning to take an elective radiology rotation, and 27% said they would consider a career in IR. Among the 73% of participants who would not consider a career in IR, the most frequent reasons, in order, were "I do not find it interesting" (45%) and "radiation exposure" (14%). Around half (48%) thought that an interventional radiologist must complete a residency training program in both radiology and surgery, and just 36% of the students believed that an interventional radiologist must complete training in radiology. Regarding the procedures performed by interventional radiologists, 66% of students identified lower limb angioplasty and stenting and 58% identified cardiac angioplasty or stenting, and 68% of the students were familiar with angioplasty. When asked about the source of their exposure to angioplasty, the majority (46%) cited a cardiologist and 16% cited an interventional radiologist. Regarding IR career prospects, 78% of the students believed that interventional radiologists have good career prospects. In conclusion, our findings reveal that perception of and exposure to IR among medical students and interns are generally poor. This has a direct influence on students' decisions regarding IR as a career path. To attract medical students and promote IR as a career, knowledge among medical students and future physicians should be increased through early exposure to IR, which will promote the specialty's growth; the involvement of the Saudi Interventional Radiology Society and the Radiological Society of Saudi Arabia is also essential.

Keywords: knowledge, medical students, perceptions, radiology, interventional radiology, Saudi Arabia

Procedia PDF Downloads 65
1169 Investigations of Effective Marketing Metric Strategies: The Case of St. George Brewery Factory, Ethiopia

Authors: Mekdes Getu Chekol, Biniam Tedros Kahsay, Rahwa Berihu Haile

Abstract:

The main objective of this study is to investigate the marketing strategy practice in the case of St. George Brewery Factory in Addis Ababa. One of the core activities that keeps a business company in business is having a well-developed marketing strategy. The study assessed how marketing strategies were practiced in the company to achieve its goals, aligned with segmentation, target market, positioning, and the marketing mix elements, in order to satisfy customer requirements. Using primary and secondary data, the study was conducted with both qualitative and quantitative approaches. The primary data were collected through open- and closed-ended questionnaires. Since the size of the population was small, the respondents were selected by means of a census. The findings show that the company used all four Ps of the marketing mix in its marketing strategies and provided quality products at affordable prices, promoting its products through extensive and effective advertising mechanisms. Product availability and accessibility are admirable, with the practice of both direct and indirect distribution channels. On the other hand, the company has identified its target customers, and the company’s market segmentation practice is based on geographical location. Communication effectiveness between the marketing department and other departments is very good. The adjusted R2 of the model indicates that product, price, promotion, and place explain 61.6% of the variance in marketing strategy practice. The remaining 38.4% of the variation in the dependent variable is explained by other factors not included in this study. The results reveal that all four independent variables, product, price, promotion, and place, have positive beta coefficients, indicating that the predictor variables have a positive effect on the dependent variable, marketing strategy practice. Even though the marketing strategies of the company are effectively practiced, there are some problems that the company faces while implementing them. These are infrastructure problems, economic problems, intensive competition in the market, shortage of raw materials, seasonality of consumption, socio-cultural problems, and the time and cost of awareness creation for the customers. Finally, the authors suggest that the company should develop a long-range view and implement a more structured approach to obtaining information about potential customers, competitors’ actions, and market intelligence within the industry. In addition, we recommend extending the study by increasing the sample size and including different marketing factors.
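For reference, the adjusted R2 quoted above is obtained from the ordinary R2, the number of observations n and the number of predictors k (here k = 4 for product, price, promotion and place):

\[
\bar{R}^2 = 1 - \left(1 - R^2\right)\frac{n - 1}{n - k - 1}.
\]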

Keywords: marketing strategy, market segmentation, target marketing, market positioning, marketing mix

Procedia PDF Downloads 33
1168 Investigations into the in situ Enterococcus faecalis Biofilm Removal Efficacies of Passive and Active Sodium Hypochlorite Irrigant Delivered into Lateral Canal of a Simulated Root Canal Model

Authors: Saifalarab A. Mohmmed, Morgana E. Vianna, Jonathan C. Knowles

Abstract:

The issue of apical periodontitis has received considerable critical attention. Bacteria is integrated into communities, attached to surfaces and consequently form biofilm. The biofilm structure provides bacteria with a series protection skills against, antimicrobial agents and enhances pathogenicity (e.g. apical periodontitis). Sodium hypochlorite (NaOCl) has become the irrigant of choice for elimination of bacteria from the root canal system based on its antimicrobial findings. The aim of the study was to investigate the effect of different agitation techniques on the efficacy of 2.5% NaOCl to eliminate the biofilm from the surface of the lateral canal using the residual biofilm, and removal rate of biofilm as outcome measures. The effect of canal complexity (lateral canal) on the efficacy of the irrigation procedure was also assessed. Forty root canal models (n = 10 per group) were manufactured using 3D printing and resin materials. Each model consisted of two halves of an 18 mm length root canal with apical size 30 and taper 0.06, and a lateral canal of 3 mm length, 0.3 mm diameter located at 3 mm from the apical terminus. E. faecalis biofilms were grown on the apical 3 mm and lateral canal of the models for 10 days in Brain Heart Infusion broth. Biofilms were stained using crystal violet for visualisation. The model halves were reassembled, attached to an apparatus and tested under a fluorescence microscope. Syringe and needle irrigation protocol was performed using 9 mL of 2.5% NaOCl irrigant for 60 seconds. The irrigant was either left stagnant in the canal or activated for 30 seconds using manual (gutta-percha), sonic and ultrasonic methods. Images were then captured every second using an external camera. The percentages of residual biofilm were measured using image analysis software. The data were analysed using generalised linear mixed models. The greatest removal was associated with the ultrasonic group (66.76%) followed by sonic (45.49%), manual (43.97%), and passive irrigation group (control) (38.67%) respectively. No marked reduction in the efficiency of NaOCl to remove biofilm was found between the simple and complex anatomy models (p = 0.098). The removal efficacy of NaOCl on the biofilm was limited to the 1 mm level of the lateral canal. The agitation of NaOCl results in better penetration of the irrigant into the lateral canals. Ultrasonic agitation of NaOCl improved the removal of bacterial biofilm.

Keywords: 3D printing, biofilm, root canal irrigation, sodium hypochlorite

Procedia PDF Downloads 211
1167 An Analysis of LoRa Networks for Rainforest Monitoring

Authors: Rafael Castilho Carvalho, Edjair de Souza Mota

Abstract:

As the largest contributor to the biogeochemical functioning of the Earth system, the Amazon Rainforest has the greatest biodiversity on the planet, harboring about 15% of all the world's flora. Recognition and preservation are the focus of research that seeks to mitigate drastic changes, especially anthropic ones, which irreversibly affect this biome. Functional and low-cost monitoring alternatives to reduce these impacts are a priority, such as those based on technologies like Low Power Wide Area Networks (LPWAN). Promising, reliable, secure and with low energy consumption, LPWAN can connect thousands of IoT devices, and in particular, LoRa is considered one of the most successful solutions for facilitating forest monitoring applications. Despite this, the forest environment, in particular the Amazon Rainforest, is a challenge for these technologies, requiring work to identify and validate the use of the technology in a real environment. To investigate the feasibility of deploying LPWAN for remote water quality monitoring of rivers in the Amazon region, a LoRa-based test bed consisting of a LoRa transmitter and a LoRa receiver was set up; both parts were implemented with Arduino and the SX1276 LoRa chip. The experiment was carried out at the Federal University of Amazonas, which contains one of the largest urban forests in Brazil. There are several springs inside the forest, and the main goal is to collect water quality parameters and transmit the data through the forest in real time to the gateway at the university. In all, there are nine water quality parameters of interest. Even with a high collection frequency, the amount of information that must be sent to the gateway is small. However, for this application, the battery of the transmitter device is a concern since, in the real application, the device must run without maintenance for long periods of time. With these constraints in mind, parameters such as the Spreading Factor (SF) and Coding Rate (CR), different antenna heights, and distances were varied to improve the connectivity quality, measured in terms of RSSI and loss rate. A handheld RF Explorer spectrum analyzer was used to obtain the RSSI values. At distances exceeding 200 m, it soon proved difficult to establish communication due to the dense foliage and high humidity. The optimal combinations of SF-CR values were 8-5 and 9-5, showing the lowest packet loss rates, 5% and 17%, respectively, with a signal strength of approximately -120 dBm, these being the best settings for this study so far. Rain and changing weather conditions imposed limitations on the equipment, and more tests are already being conducted. Subsequently, the range of the LoRa configuration must be extended using a mesh topology, especially because at least three different collection points in the same water body are required.
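To make the SF/CR trade-off concrete, the sketch below implements the standard SX127x time-on-air formula from the Semtech datasheet, which links the spreading factor and coding rate discussed above to packet airtime and hence to transmitter energy use. The bandwidth, payload size and preamble length are assumptions for the example, not settings reported in the study.

```python
import math

def lora_time_on_air_ms(payload_bytes, sf, cr, bw_hz=125_000, preamble_symbols=8,
                        explicit_header=True, crc=True, low_dr_optimize=None):
    """Time on air (ms) of one LoRa packet, following the SX127x datasheet formula.
    cr is the coding-rate index 1..4, corresponding to 4/5 .. 4/8."""
    if low_dr_optimize is None:
        low_dr_optimize = sf >= 11 and bw_hz == 125_000   # common rule of thumb
    t_sym = (2 ** sf) / bw_hz                             # symbol duration in seconds
    ih = 0 if explicit_header else 1
    de = 1 if low_dr_optimize else 0
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * (1 if crc else 0) - 20 * ih
    payload_symbols = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    t_preamble = (preamble_symbols + 4.25) * t_sym
    return 1000 * (t_preamble + payload_symbols * t_sym)

# Compare the two best settings reported (SF8 and SF9 with coding rate 4/5) for an assumed 20-byte payload.
for sf in (8, 9):
    print(f"SF{sf}, CR 4/5: {lora_time_on_air_ms(20, sf, cr=1):.1f} ms")
```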

Keywords: IoT, LPWAN, LoRa, coverage, loss rate, forest

Procedia PDF Downloads 65
1166 Prediction of Sepsis Illness from Patients Vital Signs Using Long Short-Term Memory Network and Dynamic Analysis

Authors: Marcio Freire Cruz, Naoaki Ono, Shigehiko Kanaya, Carlos Arthur Mattos Teixeira Cavalcante

Abstract:

The systems that record patient care information, known as Electronic Medical Records (EMR), and those that monitor vital signs of patients, such as heart rate, body temperature, and blood pressure, have been extremely valuable for the effectiveness of patients' treatment. Several kinds of research have used data from EMRs and patients' vital signs to predict illnesses. Among them, we highlight those that intend to predict, classify, or at least identify patterns of sepsis illness in patients under vital sign monitoring. Sepsis is an organic dysfunction caused by a dysregulated patient response to an infection and affects millions of people worldwide. Early detection of sepsis is expected to provide a significant improvement in its treatment. Preceding works usually combined medical, statistical, mathematical and computational models to develop detection methods for early prediction, achieving higher accuracies while using the smallest number of variables. Among other techniques, we can find research using survival analysis, specialist systems, machine learning and deep learning that reached great results. In our research, patients are modeled as points moving each hour in an n-dimensional space, where n is the number of vital signs (variables). These points can reach a sepsis target point after some time. For now, the sepsis target point was calculated using the median of all patients' variables at the sepsis onset. From these points, we calculate for each hour the position vector, the first derivative (velocity vector) and the second derivative (acceleration vector) of the variables to evaluate their behavior. We then construct a prediction model based on a Long Short-Term Memory (LSTM) network, including these derivatives as explanatory variables. The accuracy of the prediction 6 hours before the time of sepsis, considering only the vital signs, reached 83.24%, and by including the position, velocity, and acceleration vectors, we obtained 94.96%. The data are being collected from the Medical Information Mart for Intensive Care (MIMIC) database, a public database that contains vital signs, laboratory test results, observations, notes, and so on, from more than 60,000 patients.
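A minimal sketch of the feature construction and model described above, assuming hourly vital-sign sequences already aligned per patient; the array shapes, layer sizes and training settings are illustrative assumptions, not the study's configuration.

```python
import numpy as np
import tensorflow as tf

def add_dynamics(vitals):
    """vitals: (patients, hours, n_signs). Append the first derivative (velocity) and the
    second derivative (acceleration) of each vital sign as additional channels."""
    velocity = np.gradient(vitals, axis=1)
    acceleration = np.gradient(velocity, axis=1)
    return np.concatenate([vitals, velocity, acceleration], axis=-1).astype("float32")

# Toy data: 256 patients, 24 hourly observations, 6 vital signs; label = sepsis onset within 6 h.
rng = np.random.default_rng(0)
X = add_dynamics(rng.normal(size=(256, 24, 6)))
y = rng.integers(0, 2, size=256)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=X.shape[1:]),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```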

Keywords: dynamic analysis, long short-term memory, prediction, sepsis

Procedia PDF Downloads 104
1165 Enhancing Fault Detection in Rotating Machinery Using Wiener-CNN Method

Authors: Mohamad R. Moshtagh, Ahmad Bagheri

Abstract:

Accurate fault detection in rotating machinery is of utmost importance to ensure optimal performance and prevent costly downtime in industrial applications. This study presents a robust fault detection system based on vibration data collected from rotating gears under various operating conditions. The considered scenarios include: (1) both gears being healthy, (2) one healthy gear and one faulty gear, and (3) introducing an imbalanced condition to a healthy gear. Vibration data were acquired using a Hentek 1008 device and stored in a CSV file. Python code implemented in the Spyder environment was used for data preprocessing and analysis. Wiener features were extracted using the Wiener feature selection method. These features were then employed in multiple machine learning algorithms, including Convolutional Neural Networks (CNN), Multilayer Perceptron (MLP), K-Nearest Neighbors (KNN), and Random Forest, to evaluate their performance in detecting and classifying faults in both the training and validation datasets. The comparative analysis of the methods revealed the superior performance of the Wiener-CNN approach. The Wiener-CNN method achieved a remarkable accuracy of 100% for both the two-class (healthy gear and faulty gear) and three-class (healthy gear, faulty gear, and imbalanced) scenarios in the training and validation datasets. In contrast, the other methods exhibited varying levels of accuracy. The Wiener-MLP method attained 100% accuracy for the two-class training dataset and 100% for the validation dataset. For the three-class scenario, the Wiener-MLP method demonstrated 100% accuracy in the training dataset and 95.3% accuracy in the validation dataset. The Wiener-KNN method yielded 96.3% accuracy for the two-class training dataset and 94.5% for the validation dataset. In the three-class scenario, it achieved 85.3% accuracy in the training dataset and 77.2% in the validation dataset. The Wiener-Random Forest method achieved 100% accuracy for the two-class training dataset and 85% for the validation dataset, while for the three-class scenario it attained 100% accuracy in the training dataset and 90.8% in the validation dataset. The exceptional accuracy demonstrated by the Wiener-CNN method underscores its effectiveness in accurately identifying and classifying fault conditions in rotating machinery. The proposed fault detection system utilizes vibration data analysis and advanced machine learning techniques to improve operational reliability and productivity. By adopting the Wiener-CNN method, industrial systems can benefit from enhanced fault detection capabilities, facilitating proactive maintenance and reducing equipment downtime.
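The abstract does not detail the Wiener step, so the sketch below shows one plausible reading only: each vibration window is denoised with a Wiener filter and then classified with a small 1D CNN. The data are synthetic, and the filter size and network layout are assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import wiener
import tensorflow as tf

rng = np.random.default_rng(0)
n_windows, window_len = 600, 1024
X_raw = rng.normal(size=(n_windows, window_len)).astype("float32")   # synthetic vibration windows
y = rng.integers(0, 3, size=n_windows)                               # healthy / faulty / imbalanced

# Wiener-filter each window, then add a channel axis for the 1D CNN: (windows, samples, 1).
X = np.stack([wiener(w, mysize=29) for w in X_raw])[..., None]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window_len, 1)),
    tf.keras.layers.Conv1D(16, kernel_size=64, strides=8, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```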

Keywords: fault detection, gearbox, machine learning, wiener method

Procedia PDF Downloads 60
1164 Damping Optimal Design of Sandwich Beams Partially Covered with Damping Patches

Authors: Guerich Mohamed, Assaf Samir

Abstract:

The application of viscoelastic materials in the form of constrained layers in mechanical structures is an efficient and cost-effective technique for solving noise and vibration problems. This technique requires a design tool to select the best location, type, and thickness of the damping treatment. This paper presents a finite element model for the vibration of beams partially or fully covered with a constrained viscoelastic damping material. The model is based on Bernoulli-Euler theory for the faces and Timoshenko beam theory for the core. It uses four variables: the through-thickness constant deflection, the axial displacements of the two faces, and the bending rotation of the beam. The sandwich beam finite element is compatible with the conventional C1 finite element for homogeneous beams. To validate the proposed model, several free vibration analyses of fully or partially covered beams, with different locations of the damping patches and different percentages of coverage, are carried out. The results show that the proposed approach can be used as an effective tool to study the influence of the location and treatment size on the natural frequencies and the associated modal loss factors. Then, a parametric study regarding the variation in the damping characteristics of partially covered beams has been conducted. In these studies, the effects of the core shear modulus value, the patch size, the thicknesses of the constraining layer and the core, and the locations of the patches are considered. In partial coverage, the spatial distribution of the additive damping provided by the viscoelastic material is as important as the thickness and material properties of the viscoelastic layer and the constraining layer. Indeed, to limit the added mass and to attain maximum damping, the damping patches should be placed at optimum locations. These locations are often selected using the modal strain energy indicator. Following this approach, the damping patches are applied over regions of the base structure with the highest modal strain energy to target specific modes of vibration. In the present study, a more efficient indicator is proposed, which consists of placing the damping patches over regions of high energy dissipation through the viscoelastic layer of the fully covered sandwich beam. The presented approach is used in an optimization method to select the best location for the damping patches as well as the material thicknesses and material properties of the layers that will yield optimal damping with the minimum area of coverage.
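
For reference, the modal strain energy indicator mentioned above is commonly written in the following form (an illustrative textbook expression, not reproduced from the paper), where the core material loss factor and the modal strain energies are the quantities defined in the comments:

```latex
% Modal strain energy (MSE) estimate of the r-th modal loss factor:
% \eta_v       - loss factor of the viscoelastic core material
% U_v^{(r)}    - modal strain energy stored in the viscoelastic layer for mode r
% U_tot^{(r)}  - total modal strain energy of mode r
\eta_r \;\approx\; \eta_v \,\frac{U_v^{(r)}}{U_{\mathrm{tot}}^{(r)}}
```

Patches placed where this energy ratio, or the local energy dissipation through the viscoelastic layer as proposed here, is largest contribute the most damping for the least added mass.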

Keywords: finite element model, damping treatment, viscoelastic materials, sandwich beam

Procedia PDF Downloads 135
1163 Optimization of MAG Welding Process Parameters Using Taguchi Design Method on Dead Mild Steel

Authors: Tadele Tesfaw, Ajit Pal Singh, Abebaw Mekonnen Gezahegn

Abstract:

Welding is a basic manufacturing process for making components or assemblies. Recent welding economics research has focused on developing reliable machinery databases to ensure optimum production. Research on welding of materials like steel is still critical and ongoing. Welding input parameters play a very significant role in determining the quality of a weld joint. The metal active gas (MAG) welding parameters are the most important factors affecting the quality, productivity and cost of welding in many industrial operations. The aim of this study is to investigate the optimum process parameters for metal active gas welding of a 60 x 60 x 5 mm dead mild steel plate workpiece, using the Taguchi method to formulate the statistical experimental design on a semi-automatic welding machine. An experimental study was conducted at Bishoftu Automotive Industry, Bishoftu, Ethiopia. This study examines the influence of four welding parameters (control factors), namely welding voltage (V), welding current (A), wire speed (m/min), and gas (CO2) flow rate (l/min), each at three levels, on the variability of welding hardness. The objective function has been chosen in relation to the MAG welding parameters, i.e., the welding hardness of the final products. Nine experimental runs based on an L9 orthogonal array of the Taguchi method were performed. The orthogonal array, signal-to-noise (S/N) ratio and analysis of variance (ANOVA) are employed to investigate the welding characteristics of the dead mild steel plate and to obtain the optimum level of every input parameter at the 95% confidence level. The optimal parameter setting was found to be a welding voltage of 22 V, a welding current of 125 A, a wire speed of 2.15 m/min and a gas flow rate of 19 l/min, within the constraints of the production process. Finally, six confirmation welds were carried out; the agreement of the predicted values with the experimental values confirms the effectiveness of the analysis of welding hardness (quality) in the final products. It is found that welding current has the major influence on the quality of welded joints. The experimental result for the optimum setting gave better weld hardness than the initial setting. This study is valuable for Ethiopian industries welding plates of different materials and thicknesses.
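
The S/N ratio and main-effect computation behind such an analysis can be sketched as follows (illustrative hardness values and factor labels, not the study's data; a larger-the-better S/N ratio is assumed since higher hardness is desired):

```python
# Minimal sketch of larger-the-better S/N ratios and main effects for an L9 design.
import numpy as np

# Standard L9 orthogonal array: 9 runs x 4 factors, each coded at levels 0, 1, 2.
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])
hardness = np.array([[61, 63], [65, 66], [62, 60],   # replicate measurements per run
                     [67, 68], [70, 69], [64, 65],    # (made-up numbers)
                     [66, 67], [71, 72], [68, 69]], dtype=float)

# Larger-the-better S/N ratio: -10*log10(mean(1/y^2)) for each run.
sn = -10.0 * np.log10(np.mean(1.0 / hardness**2, axis=1))

# Main effect of each factor = mean S/N at each of its three levels.
for f, name in enumerate(["voltage", "current", "wire speed", "gas flow"]):
    effects = [sn[L9[:, f] == lvl].mean() for lvl in range(3)]
    print(name, np.round(effects, 2))   # the level with the highest mean S/N is optimal
```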

Keywords: weld quality, metal active gas welding, dead mild steel plate, orthogonal array, analysis of variance, Taguchi method

Procedia PDF Downloads 468
1162 Digital Image Correlation: Metrological Characterization in Mechanical Analysis

Authors: D. Signore, M. Ferraiuolo, P. Caramuta, O. Petrella, C. Toscano

Abstract:

Digital Image Correlation (DIC) is a newly developed optical technique that is spreading across all engineering sectors because it allows non-destructive estimation of the entire surface deformation without any contact with the component under analysis. These characteristics make DIC very appealing in all cases where the global deformation state is to be known without using strain gauges, which are the most commonly used measuring devices. DIC is applicable to any material subjected to distortion caused by either thermal or mechanical load, allowing high-definition maps of displacements and deformations to be obtained. That is why, in the civil and transportation industries, DIC is very useful for studying the behavior of metallic materials as well as of composite materials. DIC is also used in the medical field for the characterization of the local strain field of vascular tissue surfaces subjected to uniaxial tensile loading. DIC can be carried out in two-dimensional mode (2D DIC) if a single camera is used, or in three-dimensional mode (3D DIC) if two cameras are involved. Each point of the test surface framed by the cameras can be associated with a specific pixel of the image, and the coordinates of each point are calculated knowing the relative distance between the two cameras together with their orientation. In both arrangements, when a component is subjected to a load, several images related to different deformation states can be acquired through the cameras. Dedicated software analyzes the images via the mutual correlation between the reference image (obtained without any applied load) and those acquired during the deformation, giving the relative displacements. In this paper, a metrological characterization of digital image correlation is performed on aluminum and composite targets, both in static and dynamic loading conditions, by comparison between DIC and strain gauge measurements. In the static test, interesting results have been obtained thanks to an excellent agreement between the two measuring techniques. In addition, the deformation detected by DIC is compliant with the result of a FEM simulation. In the dynamic test, DIC was able to follow the periodic deformation of the specimen with good accuracy, giving results coherent with those given by the FEM simulation. In both situations, it was seen that the DIC measurement accuracy depends on several parameters, such as the optical focusing, the parameters chosen to perform the mutual correlation between the images and, finally, the reference points on the image to be analyzed. In the future, the influence of these parameters will be studied, and a method to increase the accuracy of the measurements will be developed in accordance with the requirements of industry, especially the aerospace one.
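
The mutual correlation step can be illustrated with the following minimal 2D DIC sketch (not the metrological setup of the paper; the subset size, search range and synthetic speckle pattern are assumptions). It finds the integer-pixel displacement of one subset by maximizing the zero-normalized cross-correlation between the reference and deformed images:

```python
# Minimal sketch of subset matching by zero-normalized cross-correlation.
import numpy as np

def subset_displacement(ref, deformed, top, left, size=21, search=10):
    """Return the (dy, dx) displacement of the subset ref[top:top+size, left:left+size]."""
    subset = ref[top:top + size, left:left + size].astype(float)
    subset = (subset - subset.mean()) / subset.std()
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = deformed[top + dy:top + dy + size,
                            left + dx:left + dx + size].astype(float)
            cand = (cand - cand.mean()) / cand.std()
            score = np.mean(subset * cand)        # zero-normalized cross-correlation
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

rng = np.random.default_rng(2)
ref = rng.random((200, 200))                          # synthetic speckle pattern
deformed = np.roll(ref, shift=(3, -2), axis=(0, 1))   # rigid shift: 3 px down, 2 px left
print(subset_displacement(ref, deformed, top=80, left=80))   # expected (3, -2)
```

Practical DIC software refines this integer estimate to sub-pixel accuracy with interpolation and subset shape functions, which is where the parameters discussed above (focusing, correlation settings, reference points) influence the final accuracy.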

Keywords: accuracy, deformation, image correlation, mechanical analysis

Procedia PDF Downloads 295
1161 Two-Stage Estimation of Tropical Cyclone Intensity Based on Fusion of Coarse and Fine-Grained Features from Satellite Microwave Data

Authors: Huinan Zhang, Wenjie Jiang

Abstract:

Accurate estimation of tropical cyclone intensity is of great importance for disaster prevention and mitigation. Existing techniques are largely based on satellite imagery data, and research into and utilization of the inner thermal core structure characteristics of tropical cyclones still pose challenges. This paper presents a two-stage tropical cyclone intensity estimation network based on the fusion of coarse- and fine-grained features from microwave brightness temperature data. The data used in this network are obtained from the thermal core structure of tropical cyclones through Advanced Technology Microwave Sounder (ATMS) inversion. Firstly, the thermal core information in the pressure direction is comprehensively expressed through the maximum intensity projection (MIP) method, constructing coarse-grained thermal core images that represent the tropical cyclone. These images provide a coarse-grained wind speed range estimate in the first stage. Then, based on this result, fine-grained features are extracted by combining thermal core information from multiple view profiles with a distributed network and fused with the coarse-grained features from the first stage to obtain the final two-stage network wind speed estimate. Furthermore, to better capture the long-tail distribution characteristics of tropical cyclones, focal loss is used in the coarse-grained loss function of the first stage, and ordinal regression loss is adopted in the second stage to replace traditional single-value regression. The selected tropical cyclones span 2012 to 2021 and are distributed in the North Atlantic (NA) region. The training set covers 2012 to 2017, the validation set 2018 to 2019, and the test set 2020 to 2021. Based on the Saffir-Simpson Hurricane Wind Scale (SSHS), this paper categorizes tropical cyclone levels into three major categories: pre-hurricane, minor hurricane, and major hurricane, achieving a classification accuracy of 86.18% and an intensity estimation error of 4.01 m/s for the NA region. The results indicate that thermal core data can effectively represent the level and intensity of tropical cyclones, warranting further exploration of tropical cyclone attributes with this data.
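
The maximum intensity projection step can be pictured with the following minimal sketch (a synthetic brightness-temperature cube, not the ATMS retrieval; the dimensions and injected warm-core anomaly are illustrative):

```python
# Minimal sketch: collapsing a 3-D brightness-temperature cube (pressure x lat x lon)
# into a coarse-grained thermal-core image via maximum intensity projection.
import numpy as np

rng = np.random.default_rng(3)
cube = rng.normal(loc=250.0, scale=5.0, size=(22, 64, 64))  # 22 pressure levels, 64x64 grid

# Warm-core anomaly injected at mid levels, for illustration only.
cube[8:14, 28:36, 28:36] += 15.0

mip_image = cube.max(axis=0)             # maximum intensity projection over pressure
print(mip_image.shape, mip_image.max())  # (64, 64); the peak reflects the warm core
```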

Keywords: artificial intelligence, deep learning, data mining, remote sensing

Procedia PDF Downloads 40
1160 Impact of Reproductive Technologies on Women's Lives in New Delhi: A Study from Feminist Perspective

Authors: Zairunisha

Abstract:

This paper is concerned with the ways in which Assisted Reproductive Technologies (ARTs) affect women's lives and perceptions regarding their infertility, contraception and reproductive health. Like other female animals, the human female has been endowed by nature with the biological potential of procreation and becoming a mother. However, during the last few decades, this phenomenal disposition of women has become a technological affair to achieve fertility and contraception. Medical practices in patriarchal societies are governed by male scientists and technical and medical professionals who try to control women as procreators instead of providing them with choices. The use of ARTs presents innumerable vexed ethical questions and issues, such as: the place and role of a child in a woman's life, the freedom of women to make their own choices related to the use of ARTs, the challenges and complexities women face at social and personal levels regarding the use of ARTs, and the effect of ARTs on their life as mothers and on other relationships. The paper is based on a survey study to explore and analyze the above ethical issues arising from the use of Assisted Reproductive Technologies (ARTs) by women in New Delhi, the capital of India. A rapid increase in fertility clinics has been noticed recently. It is claimed that these clinics serve women by using ART procedures for infertile couples and individuals who want to have a child or terminate a pregnancy. The study is an attempt to articulate a critique of ARTs from a feminist perspective. A qualitative feminist research methodology has been adopted for conducting the survey study. An attempt has been made to identify the ways in which a woman's life is affected in terms of her perceptions, apprehensions, choices and decisions regarding new reproductive technologies. A sample of 18 women from New Delhi was taken for in-depth interviews to investigate their perception of and response to the use of ARTs, with a focus on (i) successful use of ARTs, (ii) unsuccessful use of ARTs, and (iii) use of ARTs in progress with results yet to be known. The survey was done to investigate the impact of ARTs on women's physical, emotional and psychological conditions as well as on their social relations and choices. The complexities and challenges faced by women in the voluntary and involuntary (forced) use of ARTs in Delhi have been illustrated. A critical analysis of the interviews revealed that these technologies are used and developed for making profits at the cost of women's lives, whereby economically privileged women and individuals are able to purchase services from less privileged ones. In this way, the amalgamation of technology and cultural traditions is redefining and re-conceptualising the traditional patterns of motherhood, fatherhood, kinship and family relations within the realm of the new ways of reproduction introduced through the use of ARTs.

Keywords: reproductive technologies, infertilities, voluntary, involuntary

Procedia PDF Downloads 361
1159 Issues of Accounting of Lease and Revenue according to International Financial Reporting Standards

Authors: Nadezhda Kvatashidze, Elena Kharabadze

Abstract:

It is broadly known that leasing is a flexible means of funding enterprises. Leasing reduces the risk related to access to and possession of assets, as well as to the obtainment of funding. Therefore, it is important to refine lease accounting. The lease accounting regulations under the previously applicable standard (International Accounting Standard 17) make concealment of liabilities possible. As a result, information users get inaccurate and incomplete information and have to resort to an additional assessment of the off-balance-sheet lease liabilities. In order to address the problem, the International Accounting Standards Board decided to change the approach to lease accounting. With the deficiencies of the applicable standard taken into account, the new standard (IFRS 16 ‘Leases’) aims at supplying appropriate and fair lease-related information to users. Save for certain exclusions, the lessee is obliged to recognize all lease agreements in its financial statements. The approach was determined by the fact that, under a lease agreement, rights and obligations arise in the form of assets and liabilities. Immediately upon conclusion of the lease agreement, the lessee takes the asset at its disposal and assumes the obligation to make the lease-related payments, thereby meeting the recognition criteria defined by the Conceptual Framework for Financial Reporting, and the corresponding amounts are to be entered into the financial statements. The new lease accounting standard secures the supply of quality and comparable information to financial information users. The International Accounting Standards Board and the US Financial Accounting Standards Board jointly developed IFRS 15 ‘Revenue from Contracts with Customers’. The standard establishes detailed practical criteria for revenue recognition, such as identification of the performance obligations in the contract, determination of the transaction price and its components, especially variable consideration and other important components, as well as the passage of control over the asset to the customer. IFRS 15 ‘Revenue from Contracts with Customers’ is very similar to the relevant US standards and includes requirements more specific and consistent than those of the standards previously in place. The new standard is going to change the recognition terms and techniques in industries such as construction, telecommunications (mobile and cable networks), licensing (media, science, franchising), real property, software, etc.
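
As a simple illustration of the lessee accounting described above, the sketch below (illustrative figures only, not taken from the standard or the paper) computes the initial lease liability as the present value of the remaining lease payments, which is the basis for the right-of-use asset recognized on the balance sheet:

```python
# Minimal sketch with made-up figures: initial measurement of a lease liability
# as the present value of the lease payments, discounted at the rate implicit
# in the lease (or the lessee's incremental borrowing rate).
annual_payment = 10_000.0   # fixed payment made at the end of each year
term_years = 5
discount_rate = 0.06

lease_liability = sum(annual_payment / (1 + discount_rate) ** t
                      for t in range(1, term_years + 1))
right_of_use_asset = lease_liability   # plus initial direct costs, if any

print(round(lease_liability, 2))       # about 42123.64 recognized on the balance sheet
```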

Keywords: assessment of the lease assets and liabilities, contractual liability, division of contract, identification of contracts, contract price, lease identification, lease liabilities, off-balance sheet, transaction value

Procedia PDF Downloads 300
1158 A Mixed-Method Study Exploring Expressive Writing as a Brief Intervention Targeting Mental Health and Wellbeing in Higher Education Students: A Focus on the Quantitative Findings

Authors: Gemma Reynolds, Deborah Bailey Rodriguez, Maria Paula Valdivieso Rueda

Abstract:

In recent years, the mental health of Higher Education (HE) students has been a growing concern. This has been further exacerbated by the stresses associated with the Covid-19 pandemic, placing students at even greater risk of developing mental health issues. Support available to students in HE tends to follow an established and traditional route. Demands for counselling services have grown, not only with the increase in student numbers but also with the number of students seeking support for mental health issues. One way of improving well-being and mental health in HE students is through the use of brief interventions, such as expressive writing (EW). This intervention involves encouraging individuals to write continuously for at least 15-20 minutes for three to five sessions (often on consecutive days) about their deepest thoughts and feelings, to explore significant personal experiences in a meaningful way. Given the brevity, simplicity and cost-effectiveness of EW, this intervention has considerable potential for HE populations. The current study, therefore, employed a mixed-methods design to explore the effectiveness of EW in reducing anxiety, general stress, academic stress and depression in HE students while improving well-being. HE students at MDX were randomly assigned to one of three conditions: (1) the UniExp-EW group, who were required to write about their emotions and thoughts about any stressors they have faced that are directly relevant to their university experience; (2) the NonUniExp-EW group, who were required to write about their emotions and thoughts about any stressors that are NOT directly relevant to their university experience; and (3) the Control group, who were required to write about how they spent their weekend, with no reference to thoughts or emotions, and without thinking about university. Participants were required to carry out the EW intervention for 15 minutes per day for four consecutive days. Baseline mental health and well-being measures were taken before the intervention via a battery of standardised questionnaires. Following completion of the intervention on day four, participants were required to complete the questionnaires a second time, and again one week later. Participants were also invited to attend focus groups to discuss their experience of the intervention. This will allow an in-depth investigation into students' perceptions of EW as an effective intervention, to determine whether they would choose to use it in the future. The quantitative findings will be discussed at the conference, together with the important implications of those findings. The study is fundamental because, if EW is an effective intervention for improving mental health and well-being in HE students, its brevity and simplicity mean it can be easily implemented and made freely available to students. Improving the mental health and well-being of HE students can have knock-on benefits for academic skills and career development.

Keywords: mental health, wellbeing, higher education students, expressive writing

Procedia PDF Downloads 75
1157 Gene Expression and Staining Agents: Exploring the Factors That Influence the Electrophoretic Properties of Fluorescent Proteins

Authors: Elif Tugce Aksun Tumerkan, Chris Lowe, Hannah Krupa

Abstract:

Fluorescent proteins are self-sufficient in forming chromophores with a visible wavelength from a three-amino-acid sequence within their own polypeptide structure. The chromophore is a molecule that absorbs a photon of light and exhibits an energy transition equal to the energy of the absorbed photon. Fluorescent proteins (FPs) consist of a chain of 238 amino acid residues and are composed of 11 beta strands forming a cylinder that surrounds an alpha helix structure. With a better understanding of the chromophore system and the increasing advances in protein engineering in recent years, the properties of FPs offer the potential for new applications. They have been used as sensors and probes in molecular biology and cell-based research, giving the chance to observe the localization, structural variation and movement of FP-tagged cells. For clarifying the functional uses of fluorescent proteins, the electrophoretic properties of these proteins are among the most important parameters. Sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE) analysis is commonly used for determining electrophoretic properties. While many techniques are used for determining functionality in protein-based research, SDS-PAGE analysis can only provide a molecular-level assessment of the proteolytic fragments. Before SDS-PAGE analysis, fluorescent proteins need to be successfully purified. Because direct purification of the target FPs from the animal is difficult, gene expression is commonly used, which must be done by transformation with a plasmid. Furthermore, the gel used in electrophoresis and the properties of the staining agents play a key role. In this review, the different factors that have an impact on the electrophoretic properties of fluorescent proteins are explored. Fluorescent protein separation and purification are the essential steps before electrophoresis and should be done very carefully. For protein purification, the gene expression process and the steps that follow it have a significant function. For successful gene expression, the properties of the bacteria selected for expression and of the plasmid used are essential. Each bacterium has its own characteristics to which gene expression is very sensitive, and the procedure used is also an important factor for fluorescent protein expression. Other important factors are the gel formula and the staining agents used. The gel formula has an effect on the mobility of specific proteins, and staining with the correct agents is a key step for visualizing the electrophoretic protein bands. The visibility of proteins can change depending on the staining reagents. Overall, this review emphasizes that gene expression and purification have a stronger effect than the electrophoresis protocol and staining agents.

Keywords: cell biology, gene expression, staining agents, SDS-page

Procedia PDF Downloads 172
1156 Cessna Citation X Business Aircraft Stability Analysis Using Linear Fractional Representation LFRs Model

Authors: Yamina Boughari, Ruxandra Mihaela Botez, Florian Theel, Georges Ghazi

Abstract:

Clearance of the flight control laws of a civil aircraft is a long and expensive process in the aerospace industry. Thousands of flight combinations in terms of speeds, altitudes, gross weights, centers of gravity and angles of attack have to be investigated and proved to be safe. Nonetheless, with this method, a worst-case flight condition can easily be missed, and missing it would lead to a critical situation. Indeed, it would be impossible to analyze a model over the infinite number of cases contained within its flight envelope, which would require more time and therefore more design cost. Therefore, in industry, the technique of meshing the flight envelope is commonly used: for each point of the flight envelope, simulation of the associated model verifies whether the specifications are satisfied or not. In order to perform fast, comprehensive and effective analysis, models with varying parameters were developed by incorporating variations, or uncertainties, into the nominal models; these are known as Linear Fractional Representation (LFR) models and are able to describe the aircraft dynamics by taking uncertainties over the flight envelope into account. In this paper, the LFR models are developed using the speeds and altitudes as varying parameters and are built from several flight conditions expressed in terms of speeds and altitudes. The use of such a method has gained great interest among aeronautical companies, which see a promising future for it in modeling and, particularly, in the design and certification of control laws. In this research paper, we focus on the Cessna Citation X open-loop stability analysis. The data are provided by a Level D Research Aircraft Flight Simulator, which corresponds to the highest flight dynamics certification level; this simulator was developed by CAE Inc., and its development was based on the research requirements of the LARCASE laboratory. The acquired data were used to develop a linear model of the airplane in its longitudinal and lateral motions, and further to create the LFR models for 12 XCG/weight conditions, and thus the whole flight envelope, using a friendly Graphical User Interface developed during this study. Then, the LFR models are analyzed using an interval analysis method based upon a Lyapunov function, as well as the ‘stability and robustness analysis’ toolbox. The results are presented in the form of graphs; thus, they offer good readability and are easily exploitable. The weakness of this method lies in a relatively long calculation time, equal to about four hours for the entire flight envelope.
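
The flight-envelope mesh idea can be sketched as follows (a toy two-state model with made-up coefficients, not the Cessna Citation X data or the LFR formulation): the state matrix is evaluated over a speed-altitude grid and each point is flagged as open-loop unstable if any eigenvalue has a non-negative real part.

```python
# Minimal sketch, assuming a toy speed/altitude-dependent longitudinal model.
import numpy as np

def A_matrix(V, h):
    """Toy 2-state short-period model whose coefficients vary with speed and altitude."""
    rho = 1.225 * np.exp(-h / 8500.0)       # crude exponential atmosphere
    q = 0.5 * rho * V**2                     # dynamic pressure
    return np.array([[-0.02 * q / 1e3, 1.0],
                     [-0.8 * q / 1e3, -0.05 * q / 1e3]])

speeds = np.linspace(120.0, 260.0, 15)       # m/s
altitudes = np.linspace(0.0, 12_000.0, 13)   # m

unstable = [(V, h) for V in speeds for h in altitudes
            if np.any(np.linalg.eigvals(A_matrix(V, h)).real >= 0.0)]
print(f"{len(unstable)} unstable grid points out of {speeds.size * altitudes.size}")
```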

Keywords: flight control clearance, LFR, stability analysis, robustness analysis

Procedia PDF Downloads 336
1155 Folding of β-Structures via the Polarized Structure-Specific Backbone Charge (PSBC) Model

Authors: Yew Mun Yip, Dawei Zhang

Abstract:

Proteins are the biological machinery that executes specific vital functions in every cell of the human body by folding into their 3D structures. When a protein misfolds from its native structure, this machinery will malfunction and lead to misfolding diseases. Although in vitro experiments are able to conclude that mutations of the amino acid sequence lead to incorrectly folded protein structures, these experiments are unable to decipher the folding process. Therefore, molecular dynamics (MD) simulations are employed to simulate the folding process, so that our improved understanding of the folding process will enable us to contemplate better treatments for misfolding diseases. MD simulations make use of force fields to simulate the folding process of peptides. Secondary structures are formed via the hydrogen bonds formed between the backbone atoms (C, O, N, H). It is important that the hydrogen bond energy computed during the MD simulation is accurate in order to direct the folding process towards the native structure. Since the atoms involved in a hydrogen bond possess very dissimilar electronegativities, the more electronegative atom will attract greater electron density from the less electronegative atom towards itself. This is known as the polarization effect. Since the polarization effect changes the electron density of the two atoms in close proximity, the atomic charges of the two atoms should also vary based on the strength of the polarization effect. However, the fixed atomic charge scheme in force fields does not account for the polarization effect. In this study, we introduce the polarized structure-specific backbone charge (PSBC) model. The PSBC model accounts for the polarization effect in MD simulations by updating the atomic charges of the backbone hydrogen bond atoms according to equations, derived from quantum-mechanical calculations, that relate the amount of charge transferred to an atom to the length of the hydrogen bond. Compared to other polarizable models, the PSBC model does not require quantum-mechanical calculations of the simulated peptide at every time-step of the simulation, yet maintains a dynamic update of the atomic charges, thereby reducing the computational cost and time while still accounting for the polarization effect. The PSBC model is applied to two different β-peptides, namely the Beta3s/GS peptide, a de novo designed three-stranded β-sheet whose structure is folded in vitro and studied by NMR, and the trpzip peptides, double-stranded β-sheets in which a correlation is found between the type of amino acids that constitute the β-turn and the β-propensity.
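
The charge-update idea can be sketched as follows; the linear functional form and every constant below are hypothetical placeholders, since the paper's quantum-mechanically fitted equations are not reproduced here.

```python
# Minimal sketch with a hypothetical charge-transfer law (not the PSBC equations):
# update the atomic charges of a backbone hydrogen bond from the current O...H distance.
import numpy as np

def charge_transfer(r_oh, q0=0.10, r0=2.0, slope=-0.05):
    """Charge (in e) transferred across an O...H hydrogen bond of length r_oh (angstrom).
    Shorter bonds transfer more charge; the linear form and constants are illustrative."""
    return max(0.0, q0 + slope * (r_oh - r0))

def update_backbone_charges(q_o, q_h, r_oh):
    """Shift charge between the acceptor O and donor H to mimic polarization."""
    dq = charge_transfer(r_oh)
    return q_o - dq, q_h + dq        # O becomes more negative, H more positive

o_pos = np.array([0.0, 0.0, 0.0])
h_pos = np.array([1.9, 0.4, 0.0])
r = np.linalg.norm(h_pos - o_pos)
print(update_backbone_charges(-0.51, 0.31, r))   # illustrative starting charges
```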

Keywords: hydrogen bond, polarization effect, protein folding, PSBC

Procedia PDF Downloads 247