Search results for: levelized cost of geothermal
643 An Evolutionary Approach for QAOA for Max-Cut
Authors: Francesca Schiavello
Abstract:
This work aims to create a hybrid algorithm, combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOA was first introduced in 2014, when it performed better than the best known classical algorithm for Max-Cut. Whilst classical algorithms have improved since then and have returned to being faster and more efficient, this was a huge milestone for quantum computing, and that work is often used as a benchmark and a foundation for exploring variants of QAOA. This, alongside other famous algorithms like Grover's or Shor's, highlights the potential that quantum computing holds. It also points to a real quantum advantage: if the hardware continues to improve, this could constitute a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side of things in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate in creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits, which restricts the scale of the problem that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space, and because of their freedom from gradients, they should suffer less from barren plateaus. Secondly, given that this algorithm performs a search in the solution space through a population of solutions, it can also be parallelized to speed up the search and optimization. The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOA with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA using a COBYLA optimizer, which is a linear-approximation-based method, and in some instances it can even produce a better Max-Cut. Whilst the final objective of the work is to create an algorithm that can consistently beat the original QAOA, or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization
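The hybrid scheme described above replaces the gradient-based optimizer with an evolutionary search over the QAOA angles. Below is a minimal, self-contained Python sketch of that idea: a statevector simulation of depth-p QAOA for Max-Cut on a small example graph, driven by a simple elitist EA with Gaussian mutation. The graph, population sizes, and EA operators are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # small example graph
n = 4

# Precompute the cut value of every computational basis state
cuts = np.array([sum(((z >> i) & 1) != ((z >> j) & 1) for i, j in edges)
                 for z in range(2**n)], dtype=float)

def qaoa_expectation(params, p=1):
    """Statevector simulation of depth-p QAOA; returns the expected cut size."""
    gammas, betas = params[:p], params[p:]
    psi = np.full(2**n, 2**(-n / 2), dtype=complex)      # |+>^n
    for gamma, beta in zip(gammas, betas):
        psi *= np.exp(-1j * gamma * cuts)                # cost layer (diagonal)
        for q in range(n):                               # mixer: RX(2*beta) on each qubit
            psi = psi.reshape(2**q, 2, -1)
            a, b = psi[:, 0, :].copy(), psi[:, 1, :].copy()
            psi[:, 0, :] = np.cos(beta) * a - 1j * np.sin(beta) * b
            psi[:, 1, :] = -1j * np.sin(beta) * a + np.cos(beta) * b
            psi = psi.reshape(-1)
    return float(np.sum(np.abs(psi)**2 * cuts))

def evolve(pop_size=20, p=1, gens=40, sigma=0.3, rng=np.random.default_rng(0)):
    """Elitist EA over the 2p QAOA angles: keep the best half, mutate it."""
    pop = rng.uniform(0, np.pi, size=(pop_size, 2 * p))
    for _ in range(gens):
        fitness = np.array([qaoa_expectation(ind, p) for ind in pop])
        elite = pop[np.argsort(fitness)[-pop_size // 2:]]
        children = elite + rng.normal(0, sigma, size=elite.shape)
        pop = np.vstack([elite, children])
    best = max(pop, key=lambda ind: qaoa_expectation(ind, p))
    return best, qaoa_expectation(best, p)

best_params, best_cut = evolve()
print(f"best expected cut: {best_cut:.3f} (max possible = {cuts.max():.0f})")
```

Because each individual's fitness evaluation is independent, the inner loop is exactly the part that the abstract proposes to parallelize.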
642 Pre- and Post-Brexit Experiences of the Bulgarian Working Class Migrants: Qualitative and Quantitative Approaches
Authors: Mariyan Tomov
Abstract:
Bulgarian working class immigrants are increasingly concerned with the UK's recent immigration policies in the context of Brexit. The new ID system would exclude many people currently working in Britain and would break the usual immigrant travel patterns. Post-Brexit Britain would aim to turn away seasonal immigrants. Measures for keeping long-term and life-long immigrants have been implemented, and migrants who aim to remain in Britain and establish a household there would be more privileged than temporary or seasonal workers. The results of such regulating mechanisms come at the expense of migrants' longings for a 'normal' existence, especially for those coming from Central and Eastern Europe. Based on in-depth interviews with Bulgarian working class immigrants, the study found that their major concerns following the decision of the UK to leave the EU relate to the freedom to travel, reside and work in the UK. Furthermore, many of the interviewed women are concerned that they could lose some of the EU's fundamental rights, such as maternity protection and protection of pregnant women from unlawful dismissal. The rise in commodity prices and university fees and the limited access to public services, healthcare and social benefits in the UK are also discussed in the paper. The most serious problem, according to the interviewees, is that the attitude towards Bulgarians and other immigrants in the UK is deteriorating. Both traditional and social media in the UK often portray migrants negatively by claiming that they take British jobs while simultaneously abusing the welfare system. As a result, Bulgarian migrants often face social exclusion, which might have a negative influence on their health and welfare. In this sense, some of the interviewees stress that the most important changes after Brexit must take place in British society itself. The aim of the proposed study is to provide a better understanding of the Bulgarian migrants' economic, health and sociocultural experience in the context of Brexit. Methodologically, the proposed paper leans on: 1. Analysis of ethnographic materials dedicated to the pre- and post-migratory experiences of Bulgarian working class migrants, using SPSS. 2. Semi-structured interviews conducted with more than 50 Bulgarian working class migrants [N > 50] in the UK, aged between 18 and 65. Communication with the interviewees took place via Viber/Skype or face-to-face interaction. 3. Analysis guided by theoretical frameworks. The paper has been developed within the framework of the research projects of the National Scientific Fund of Bulgaria: DCOST 01/25-20.02.2017 supporting COST Action CA16111 'International Ethnic and Immigrant Minorities Survey Data Network'.
Keywords: Bulgarian migrants in UK, economic experiences, sociocultural experiences, Brexit
641 Feedback from a Service Evaluation of a Modified Intrauterine Device Inserter: A First Step to a Change of the Standard IUD Insertion Procedure
Authors: Desjardin, Michaels, Martinez, Ulmann
Abstract:
The copper IUD is one of the most efficient and cost-effective contraceptives. However, pain at insertion hampers the use of this method. This is especially unfortunate in nulliparous women, often younger, who are excellent candidates for this contraception, including emergency contraception. The standard insertion procedure for a copper IUD usually involves measurement of the uterine cavity with a hysterometer and the use of a tenaculum in order to facilitate device insertion. Both procedures cause patient pain, which often constitutes a limitation of the method. To overcome these issues, we have developed a modified inserter combined with a copper IUD. The singular design of the inserter includes a flexible inflatable membrane technology allowing easy access to the uterine cavity even in cases of abnormal uterine positions or a narrow cervical canal. Moreover, this inserter makes direct IUD insertion possible with no hysterometry and no need for a tenaculum. To assess device effectiveness and patient-reported pain, a study was conducted at two clinics in France with 31 individuals who wanted to use a copper IUD as their contraceptive method. IUD insertions were performed by four healthcare providers. Operators completed a questionnaire and evaluated the effectiveness of the procedure (including correct fundal IUD placement and other usability questions) as well as their satisfaction. Patients also completed a questionnaire, and pain during the procedure was measured on a 10-cm Visual Analogue Scale (VAS). Analysis of the questionnaires indicates that correct IUD placement took place in more than 93% of women, which is a standard efficacy rate. It also demonstrates that IUD insertion resulted in no, light or moderate pain, predominantly in nulliparous women. No insertion resulted in severe pain (none above 6 cm on the 10-cm VAS). This translated into a high level of satisfaction from both patients and practitioners. In addition, this modified inserter allowed a simplification of the insertion procedure: correct fundal placement was ensured with no need for hysterometry prior to insertion (100%) nor for a cervical tenaculum to pull on the cervix (90%). Avoidance of both procedures contributed to the decrease in pain during insertion. Taken together, the results of the study demonstrate that this device constitutes a significant advance in the use of copper IUDs for any woman. It allows a simplification of the insertion procedure: there is no need for pre-insertion hysterometry and no need for traction on the cervix with a tenaculum. Increased comfort during insertion should allow a wider use of the method for nulliparous women and for emergency contraception. In addition, pain is often underestimated by practitioners, but fear of pain is obviously one of the blocking factors, as indicated by the analysis of the questionnaire. This evaluation brings interesting information on the use of this modified inserter for standard copper IUDs and promising perspectives for changing the standard IUD insertion procedure.
Keywords: contraception, IUD, innovation, pain
640 Applicability and Reusability of Fly Ash and Base Treated Fly Ash for Adsorption of Catechol from Aqueous Solution: Equilibrium, Kinetics, Thermodynamics and Modeling
Authors: S. Agarwal, A. Rani
Abstract:
Catechol is a natural polyphenolic compound that widely exists in higher plants such as teas, vegetables, fruits, tobaccos, and some traditional Chinese medicines. Fly ash-based zeolites are capable of adsorbing a wide range of pollutants, but the process of zeolite synthesis is time-consuming and requires technical setups by industry. The market cost of zeolites is quite high, restricting their use by small-scale industries for the removal of phenolic compounds. The present research proposes a simple method of alkaline treatment of FA to produce an effective adsorbent for catechol removal from wastewater. The effect of experimental parameters such as pH, temperature, initial concentration and adsorbent dose on the removal of catechol was studied in a batch reactor. For this purpose, the adsorbent materials were mixed with aqueous solutions containing catechol at initial concentrations ranging from 50 to 200 mg/L and then shaken continuously in a thermostatic orbital incubator shaker at 30 ± 0.1 °C for 24 h. The samples were withdrawn from the shaker at predetermined time intervals and separated by centrifugation (centrifuge machine MBL-20) at 2000 rpm for 4 min to yield a clear supernatant for analysis of the equilibrium concentrations of the solutes. The concentrations were measured with a double-beam UV/visible spectrophotometer (model Spectrscan UV 2600/02) at a wavelength of 275 nm for catechol. In the present study, the use of a low-cost adsorbent (BTFA) derived from coal fly ash (FA) has been investigated as a substitute for expensive methods for the sequestration of catechol. The FA and BTFA adsorbents were well characterized by XRF, FE-SEM with EDX, FTIR, and surface area and porosity measurements, which establish the chemical constituents, functional groups and morphology of the adsorbents. The catechol adsorption capacities of the synthesized BTFA and the native material were determined. Adsorption increased slightly with an increase in pH value. The monolayer adsorption capacities of FA and BTFA for catechol were 100 mg g⁻¹ and 333.33 mg g⁻¹, respectively, and maximum adsorption occurred within 60 minutes for both adsorbents used in this test. The equilibrium data are best fitted by the Freundlich isotherm on the basis of error analysis (RMSE, SSE, and χ²). Adsorption was found to be spontaneous and exothermic on the basis of thermodynamic parameters (ΔG°, ΔS°, and ΔH°). The pseudo-second-order kinetic model better fitted the data for both FA and BTFA. BTFA showed larger adsorptive capacity, higher separation selectivity, and better recyclability than FA. These findings indicate that BTFA could be employed as an effective and inexpensive adsorbent for the removal of catechol from wastewater.
Keywords: catechol, fly ash, isotherms, kinetics, thermodynamic parameters
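The isotherm and kinetic fits reported above (Freundlich isotherm selected by RMSE/SSE/χ² error analysis, pseudo-second-order kinetics) follow a standard nonlinear regression workflow. A minimal Python sketch is shown below; the data points are hypothetical placeholders, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical equilibrium data: Ce (mg/L) vs qe (mg/g)
Ce = np.array([5.0, 12.0, 25.0, 48.0, 80.0])
qe = np.array([40.0, 70.0, 110.0, 160.0, 210.0])

def freundlich(Ce, KF, n):
    """Freundlich isotherm: qe = KF * Ce^(1/n)."""
    return KF * Ce**(1.0 / n)

def pseudo_second_order(t, k2, qe_fit):
    """PSO kinetics: qt = k2*qe^2*t / (1 + k2*qe*t)."""
    return (k2 * qe_fit**2 * t) / (1.0 + k2 * qe_fit * t)

(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=[10.0, 2.0])

# Hypothetical kinetic data: time (min) vs uptake qt (mg/g)
t = np.array([5, 10, 20, 30, 60, 120], dtype=float)
qt = np.array([60, 95, 140, 165, 190, 200], dtype=float)
(k2, qe2), _ = curve_fit(pseudo_second_order, t, qt, p0=[0.001, 210.0])

def rmse(y, yhat):
    # one of the error metrics used for model selection in the abstract
    return np.sqrt(np.mean((y - yhat)**2))

print(f"Freundlich: KF={KF:.2f}, n={n:.2f}, "
      f"RMSE={rmse(qe, freundlich(Ce, KF, n)):.2f}")
print(f"PSO kinetics: k2={k2:.5f} g/(mg*min), qe={qe2:.1f} mg/g")
```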
639 Bringing the World to Net Zero Carbon Dioxide by Sequestering Biomass Carbon
Authors: Jeffrey A. Amelse
Abstract:
Many corporations aspire to become Net Zero Carbon Dioxide by 2035-2050. This paper examines what it will take to achieve those goals. Achieving Net Zero CO₂ requires an understanding of where energy is produced and consumed, the magnitude of CO₂ generation, and a proper understanding of the Carbon Cycle. The latter leads to the distinction between CO₂ and biomass carbon sequestration. Short reviews are provided of technologies previously proposed for reducing CO₂ emissions from fossil fuels or substituting renewable energy, to focus on their limitations and to show that none offers a complete solution. Of these, CO₂ sequestration is poised to have the largest impact. It will just cost money, scale-up is a huge challenge, and it will not be a complete solution. CO₂ sequestration is still at the demonstration and semi-commercial scale. Transportation accounts for only about 30% of total U.S. energy demand, and renewables account for only a small fraction of that sector. Yet, bioethanol production consumes 40% of the U.S. corn crop, and biodiesel consumes 30% of U.S. soybeans. It is unrealistic to believe that biofuels can completely displace fossil fuels in the transportation market. Bioethanol is traced through its Carbon Cycle and shown to be both energy inefficient and an inefficient use of biomass carbon. Both biofuels and CO₂ sequestration reduce future CO₂ emissions from continued use of fossil fuels. They will not remove CO₂ already in the atmosphere. Planting more trees has been proposed as a way to reduce atmospheric CO₂. Trees are a temporary solution. When they complete their Carbon Cycle, they die and release their carbon as CO₂ to the atmosphere. Thus, planting more trees is just 'kicking the can down the road.' The only way to permanently remove CO₂ already in the atmosphere is to break the Carbon Cycle by growing biomass from atmospheric CO₂ and sequestering biomass carbon. Sequestering tree leaves is proposed as a solution. Unlike wood, leaves have a short Carbon Cycle time constant. They renew and decompose every year. Allometric equations from the USDA indicate that, theoretically, sequestering only a fraction of the world's tree leaves can get the world to Net Zero CO₂ without disturbing the underlying forests. How can tree leaves be permanently sequestered? It may be as simple as rethinking how landfills are designed, to discourage instead of encourage decomposition. In traditional landfills, municipal waste undergoes rapid initial aerobic decomposition to CO₂, followed by slow anaerobic decomposition to methane and CO₂. The latter can take hundreds to thousands of years. The first step in anaerobic decomposition is hydrolysis of cellulose to release sugars, which those who have worked on cellulosic ethanol know is challenging for a number of reasons. The key to permanent leaf sequestration may be keeping the landfills dry and exploiting known inhibitors of anaerobic bacteria.
Keywords: carbon dioxide, net zero, sequestration, biomass, leaves
638 Investigating the Essentiality of Oxazolidinones in Resistance-Proof Drug Combinations in Mycobacterium tuberculosis Selected under in vitro Conditions
Authors: Gail Louw, Helena Boshoff, Taeksun Song, Clifton Barry
Abstract:
Drug resistance in Mycobacterium tuberculosis is primarily attributed to mutations in target genes. These mutations incur a fitness cost and result in bacterial generations that are less fit, which subsequently acquire compensatory mutations to restore fitness. We hypothesize that mutations in specific drug target genes influence bacterial metabolism and cellular function, which affects the ability to develop subsequent resistance to additional agents. We aim to determine whether the sequential acquisition of drug resistance and specific mutations in a well-defined clinical M. tuberculosis strain promotes or limits the development of additional resistance. In vitro mutants resistant to pretomanid, linezolid, moxifloxacin, rifampicin and kanamycin were generated from a pan-susceptible clinical strain of the Beijing lineage. The resistant phenotypes to the anti-TB agents were confirmed by the broth microdilution assay, and genetic mutations were identified by targeted gene sequencing. The mono-resistant mutants were grown in enriched medium for 14 days to assess in vitro fitness. Double-resistant mutants were generated against anti-TB drug combinations at concentrations 5x and 10x the minimum inhibitory concentration. Subsequently, mutation frequencies for these anti-TB drugs in the different mono-resistant backgrounds were determined. The initial level of resistance and the mutation frequencies observed for the mono-resistant mutants were comparable to those previously reported. Targeted gene sequencing revealed the presence of known and clinically relevant mutations in the mutants resistant to linezolid, rifampicin, kanamycin and moxifloxacin. Significant growth defects were observed for mutants grown under in vitro conditions compared to the sensitive progenitor. Determination of mutation frequencies in the mono-resistant mutants revealed a significant increase in mutation frequency against rifampicin and kanamycin, but a significant decrease in mutation frequency against linezolid and sutezolid. This suggests that these mono-resistant mutants are more prone to develop resistance to rifampicin and kanamycin, but less prone to develop resistance against linezolid and sutezolid. Even though kanamycin and linezolid both inhibit protein synthesis, these compounds target different subunits of the ribosome, thereby leading to different outcomes in terms of fitness in the mutants with impaired cellular function. These observations show that oxazolidinone treatment is instrumental in limiting the development of multi-drug resistance in M. tuberculosis in vitro.
Keywords: oxazolidinones, mutations, resistance, tuberculosis
637 Effect of Irrigation and Hydrogel on the Water Use Efficiency of a Zero-Tilled Green-Gram Relay System in the Eastern Indo-Gangetic Plain
Authors: Benukar Biswas, S. Banerjee, P. K. Bandhyopadhyaya, S. K. Patra, S. Sarkar
Abstract:
Jute can be sown as a relay crop between the lines of 15-20 days old green gram for additional pulse yield without reducing the yield of jute. The main problem of this system is water use efficiency (WUE). An increase in water productivity and a reduction in production cost have been reported for zero-tilled crops. Hydrogel can hold water up to 400 times its weight and can release 95% of the retained water. The present field study was carried out during 2015-16 at BCKV (tropical sub-humid, 1560 mm annual rainfall, 22°58′ N, 88°51′ E, 9.75 m AMSL, sandy loam soil, aeric Haplaquept, pH 6.75, organic carbon 5.4 g kg⁻¹, available N 85 kg ha⁻¹, P₂O₅ 15.3 kg ha⁻¹ and K₂O 40 kg ha⁻¹) with four levels of irrigation regime: no irrigation (RF), cumulative pan evaporation 250 mm (CPE250), CPE125 and CPE83, and three levels of hydrogel: no hydrogel (H0), 2.5 kg ha⁻¹ (H2.5) and 5 kg ha⁻¹ (H5). Throughout the crop growing period, a positive linear relationship held between the leaf area index (LAI) and the evapotranspiration rate. The strength of the relationship between ETa and LAI increased and reached its peak at 7 WAS (R² = 0.78), when the green gram was at maturity and both crops covered nearly the entire base area. This relation starts weakening from 13 WAS due to jute leaf shading. A linear relationship between system yield and ET was also obtained in the present study: 75% of the variation in system yield could be predicted by ET alone. Effective rainfall decreased with increasing irrigation frequency due to enhanced water supply, in contrast to hydrogel application, owing to the difference in water storage capacity. Irrigation contributed the major source of variability in ET. Higher irrigation frequency resulted in higher ET loss, ranging from 574 mm in RF to 764 mm in CPE83. Hydrogel application also increased water storage on a sustained basis and supplied it to the crops, raising ET from 639 mm in H0 to 671 mm in H5. WUE ranged between 0.4 kg m⁻³ (RF) and 0.63 kg m⁻³ (CPE83 H5). WUE increased with increased application of irrigation water, from 0.42 kg m⁻³ in RF to 0.57 kg m⁻³ in CPE83. Hydrogel application significantly improved the WUE, from 0.45 kg m⁻³ in H0 to 0.50 in H2.5 and 0.54 in H5. Under a relatively dry root zone (RF), both evaporation and transpiration remain at suboptimal levels, resulting in lower ET as well as lower system yield. The green gram-jute relay system can be water use efficient, with 38% higher yield, when hydrogel is applied at 2.5 kg ha⁻¹ under the deficit irrigation regime of CPE125, compared to the rainfed system without gel. Application of the gel conditioner improved water storage, checked excess water loss from the system, and met the ET demand of the relay system for a longer time. Hence, irrigation frequency could be reduced from five times at CPE83 to only three times at CPE125.
Keywords: zero tillage, deficit irrigation, hydrogel, relay system
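Water use efficiency here is system yield per unit of evapotranspired water, with 1 mm of ET over one hectare equal to 10 m³. The short sketch below back-calculates the implied yields from the ET depths and WUE values quoted in the abstract; the derived yields are approximations, not reported data.

```python
# WUE (kg/m^3) = yield (kg/ha) / ET volume (m^3/ha); 1 mm ET over 1 ha = 10 m^3

def et_volume_m3_per_ha(et_mm: float) -> float:
    return et_mm * 10.0

def wue(yield_kg_per_ha: float, et_mm: float) -> float:
    return yield_kg_per_ha / et_volume_m3_per_ha(et_mm)

# Figures quoted in the abstract (yields back-calculated, hence approximate):
for label, et_mm, wue_reported in [("RF", 574, 0.42), ("CPE83 H5", 764, 0.63)]:
    implied_yield = wue_reported * et_volume_m3_per_ha(et_mm)  # kg/ha
    print(f"{label}: ET={et_mm} mm -> {et_volume_m3_per_ha(et_mm):.0f} m^3/ha, "
          f"implied yield ~ {implied_yield:.0f} kg/ha")
```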
636 Evaluation of Invasive Tree Species for Production of Phosphate Bonded Composites
Authors: Stephen Osakue Amiandamhen, Schwaller Andreas, Martina Meincken, Luvuyo Tyhoda
Abstract:
Invasive alien tree species are currently being cleared in South Africa as a result of forest and water imbalances. These species grow wild, constituting about 40% of total forest area. They compete with the ecosystem for natural resources and are considered ecosystem engineers that rapidly change disturbance regimes. As such, they are harvested for commercial uses, but much of the harvest is wasted because of their form and structure. The waste is being sold to local communities as fuel wood. These species can be considered potential feedstock for the production of phosphate bonded composites. The presence of bark in wood-based composites leads to undesirable properties, and debarking as an option can be costly. This study investigates the potential of these invasive species, processed without debarking, with respect to some fundamental properties of wood-based panels. Invasive alien tree species were collected from EC Biomass, Port Elizabeth, South Africa. They include Acacia mearnsii (black wattle), A. longifolia (long-leaved wattle), A. cyclops (red-eyed wattle), A. saligna (golden-wreath wattle) and Eucalyptus globulus (blue gum). The logs were chipped as received. The chips were hammer-milled and screened through a 1 mm sieve. The wood particles were conditioned, and the quantity of bark in the wood was determined. The binding matrix was prepared using a reactive magnesia, phosphoric acid and class S fly ash. The materials were mixed and poured into a metallic mould. The composite within the mould was compressed at room temperature at a pressure of 200 kPa. After initial setting, which took about 5 minutes, the composite board was demoulded and air-cured for 72 h. The cured product was thereafter conditioned at 20°C and 70% relative humidity for 48 h. Tests of physical and strength properties were conducted on the composite boards. The effect of binder formulation and fly ash content on the properties of the boards was studied using fitted response surface methodology, according to a central composite experimental design (CCD), at a fixed wood loading of 75% (w/w) of total inorganic contents. The results showed that a phosphate/magnesia ratio of 3:1 and a fly ash content of 10% were required to obtain a product of good properties and sufficient strength for the intended applications. The proposed products can be used for ceilings, partitioning and insulating wall panels.
Keywords: invasive alien tree species, phosphate bonded composites, physical properties, strength
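The binder optimization above uses a central composite design (CCD) with a fitted response surface. A generic two-factor sketch of that workflow is given below; the coded factors and the response values are hypothetical, for illustration only.

```python
import numpy as np
from itertools import product

# Two coded factors: x1 = phosphate/magnesia ratio, x2 = fly ash content
alpha = np.sqrt(2.0)
factorial = np.array(list(product([-1.0, 1.0], repeat=2)))     # 4 corner runs
axial = np.array([[alpha, 0], [-alpha, 0], [0, alpha], [0, -alpha]])
center = np.zeros((3, 2))                                      # replicated centre points
X = np.vstack([factorial, axial, center])                      # 11 runs in total

# Hypothetical measured responses (e.g. board strength, MPa) for each run:
y = np.array([12.0, 15.0, 14.0, 22.0, 13.0, 11.0, 16.0, 15.0, 18.0, 18.5, 17.8])

def design_matrix(X):
    x1, x2 = X[:, 0], X[:, 1]
    # Full quadratic response surface: b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
print("fitted quadratic coefficients:", np.round(beta, 3))
```

The stationary point of the fitted quadratic (or a grid search over the coded region) then gives the optimum factor combination, which the study reports as a 3:1 phosphate/magnesia ratio with 10% fly ash.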
635 Enhancement of Hardness Related Properties of Grey Cast Iron Powder Reinforced AA7075 Metal Matrix Composites Through T6 and T8 Heat Treatments
Authors: S. S. Sharma, P. R. Prabhu, K. Jagannath, Achutha Kini U., Gowri Shankar M. C.
Abstract:
In the present global scenario, aluminum alloys are attracting the attention of many innovators as competing structural materials for automotive and space applications. Compared with other candidate alloys, the 7xxx series aluminum alloys in particular have been studied intensively because of benefits such as moderate strength, better deformation characteristics, excellent resistance to chemical degradation, and affordable cost. 7075 Al alloys have been used in the transportation industry for the fabrication of several types of automobile parts, such as wheel covers, panels and structures. It is expected that substitution of such aluminum alloys for steels will result in great improvements in energy economy, durability and recyclability. However, it is necessary to improve the strength and formability levels at low temperatures in aluminum alloys for still better applications. Aluminum-zinc-magnesium alloys, with or without other alloying additions, denoted the 7xxx series, are medium-strength heat treatable alloys. Cu, Mn and Si are the other solute elements that contribute to the improvement in mechanical properties achievable by selecting and tailoring a suitable heat treatment process. On subjecting the alloys to suitable treatments such as age hardening, or to cold deformation assisted heat treatments known as low temperature thermomechanical treatments (LTMT), these demanding properties can be achieved. T6 is the age hardening or precipitation hardening process with an artificial aging cycle, whereas T8 comprises LTMT treatment aged artificially with X% cold deformation. When cold deformation is applied after solution treatment, hardness-related properties such as wear resistance, yield and ultimate strength, and toughness increase at the expense of ductility. During precipitation hardening, both the hardness and strength of the samples increase. A decreasing peak hardness value with increasing aging temperature is the well-known behavior of age hardenable alloys. The peak hardness value increases further when room temperature deformation is combined with age hardening, known as thermomechanical treatment. Considering these aspects, it is intended to perform heat treatment and evaluate hardness, tensile strength, wear resistance and the distribution pattern of the reinforcement in the matrix. Increases in hardness of 2 to 2.5 times and 3 to 3.5 times, compared to the as-cast composite, are reported for the age hardening and LTMT treatments, respectively. Better distribution of reinforcements in the matrix, a nearly two-fold increase in strength levels and up to a 5-fold increase in wear resistance are also observed in the present study.
Keywords: reinforcement, precipitation, thermomechanical, dislocation, strain hardening
634 Radar Cross Section Modelling of Lossy Dielectrics
Authors: Ciara Pienaar, J. W. Odendaal, J. Joubert, J. C. Smit
Abstract:
The radar cross section (RCS) of dielectric objects plays an important role in many applications, such as low observability technology development, drone detection and monitoring, as well as coastal surveillance. Various materials are used to construct the targets of interest, such as metal, wood, composite materials, radar absorbent materials, and other dielectrics. Since simulated datasets are increasingly being used to supplement in-field measurements, being more cost effective and allowing a larger variety of targets to be simulated, it is important to have a high level of confidence in the predicted results. Confidence can be attained through validation. Various computational electromagnetic (CEM) methods are capable of predicting the RCS of dielectric targets. This study extends previous studies by validating full-wave and asymptotic RCS simulations of dielectric targets against measured data. The paper provides measured RCS data for a number of canonical dielectric targets exhibiting different material properties. As stated previously, these measurements are used to validate numerous CEM methods. The dielectric properties are accurately characterized to reduce the uncertainties in the simulations. Finally, an analysis of the sensitivity of oblique and normal incidence scattering predictions to material characteristics is also presented. In this paper, the ability of several CEM methods, including the method of moments (MoM) and physical optics (PO), to calculate the RCS of dielectrics was validated with measured data. A few dielectrics, exhibiting different material properties, were selected, and several canonical targets, such as flat plates and cylinders, were manufactured. The RCS of these dielectric targets was measured in a compact range at the University of Pretoria, South Africa, over a frequency range of 2 to 18 GHz and a 360° azimuth angle sweep. This study also investigated the effect of slight variations in the material properties on the calculated RCS results, by varying the material properties within a realistic tolerance range and comparing the calculated RCS results. Interesting measured and simulated results have been obtained. Large discrepancies were observed between the different methods as well as the measured data. It was also observed that the accuracy of the RCS data of the dielectrics can be frequency and angle dependent. The simulated RCS for some of these materials also exhibits high sensitivity to variations in the material properties. Comparison graphs between the measured and simulated RCS datasets are presented, and their validation is discussed. Finally, the effect that small tolerances in the material properties have on the calculated RCS results is shown, demonstrating the importance of accurate dielectric material properties for validation purposes.
Keywords: asymptotic, CEM, dielectric scattering, full-wave, measurements, radar cross section, validation
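As a rough point of reference for the asymptotic predictions discussed above, the broadside RCS of a flat plate is often estimated with the physical optics result σ = 4πA²/λ², which strictly applies to a perfectly conducting plate; for a lossy dielectric, a crude first-order correction scales this by the normal-incidence power reflection coefficient. The sketch below implements that estimate and is not the validated CEM computation used in the paper; the plate size and permittivity are assumed values.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def po_plate_rcs_broadside(a, b, f_hz, eps_r=1.0 + 0j):
    """First-order broadside RCS (m^2) of an a x b plate.

    PEC physical optics: sigma = 4*pi*(a*b)^2 / lambda^2, scaled by |Gamma|^2
    for a non-magnetic dielectric half-space of complex permittivity eps_r
    (a crude approximation for an electrically thick lossy slab)."""
    lam = C / f_hz
    sigma_pec = 4 * np.pi * (a * b)**2 / lam**2
    n = np.sqrt(eps_r)                  # complex refractive index
    gamma = (1 - n) / (1 + n)           # normal-incidence Fresnel coefficient
    return sigma_pec * abs(gamma)**2

# Assumed 0.3 m x 0.3 m plate with eps_r = 4 - 0.4j, over the measured band
for f in (2e9, 10e9, 18e9):
    rcs = po_plate_rcs_broadside(0.3, 0.3, f, eps_r=4.0 - 0.4j)
    print(f"{f / 1e9:4.0f} GHz: RCS = {10 * np.log10(rcs):6.2f} dBsm")
```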
633 Production of Recombinant Human Serum Albumin in Escherichia coli: A Crucial Biomolecule for Biotechnological and Healthcare Applications
Authors: Ashima Sharma, Tapan K. Chaudhuri
Abstract:
Human Serum Albumin (HSA) is one of the most in-demand therapeutic proteins, with immense biotechnological applications. The current source of HSA is human blood plasma. Blood is a limited and unsafe source, as it carries the risk of contamination by various blood-derived pathogens. This issue led to the exploitation of various hosts with the aim of obtaining an alternative source for the production of rHSA. However, no host has yet proven commercially effective for rHSA production because of their respective limitations. Thus, there exists an indispensable need to promote non-animal-derived rHSA production. Of all the host systems, Escherichia coli is one of the most convenient, having contributed to the production of more than 30% of the FDA-approved recombinant pharmaceuticals. E. coli grows rapidly, and its culture reaches high cell density using inexpensive and simple substrates. The fermentation batch turnaround number for E. coli culture is 300 per year, far greater than that of any other available host system. Therefore, E. coli-derived recombinant products have more economical potential, as the fermentation processes are cheaper compared to the other expression hosts available. Despite all these advantages, E. coli had not been successfully adopted as a host for rHSA production. The major bottleneck in exploiting E. coli as a host for rHSA production was aggregation, i.e., the majority of the expressed recombinant protein formed inclusion bodies (more than 90% of the total expressed rHSA) in the E. coli cytosol. Recovery of functional rHSA from inclusion bodies is not preferred because it is tedious, time-consuming, laborious and expensive. Because of this limitation, the E. coli host system was neglected for rHSA production for the last few decades. Considering the advantages of E. coli as a host, the present work targets E. coli as an alternative host for rHSA production by resolving the major issue of inclusion body formation associated with it. In the present study, we have developed a novel method for enhanced soluble and functional production of rHSA in E. coli (~60% of the total expressed rHSA in the soluble fraction) through modulation of the cellular growth, folding and environmental parameters, thereby leading to significantly improved and enhanced expression levels as well as a higher functional and soluble proportion of the total expressed rHSA in the cytosolic fraction of the host. Therefore, in the present case, we have filled a gap in the literature by exploiting the well-studied host system Escherichia coli, which is low cost, fast growing, scalable and yet neglected, for the enhancement of functional production of HSA, one of the most crucial biomolecules for clinical and biotechnological applications.
Keywords: enhanced functional production of rHSA in E. coli, recombinant human serum albumin, recombinant protein expression, recombinant protein processing
632 Effect of Starch and Plasticizer Types and Fiber Content on Properties of Polylactic Acid/Thermoplastic Starch Blend
Authors: Rangrong Yoksan, Amporn Sane, Nattaporn Khanoonkon, Chanakorn Yokesahachart, Narumol Noivoil, Khanh Minh Dang
Abstract:
Polylactic acid (PLA) is the most commercially available bio-based and biodegradable plastic at present. PLA has been used in plastics-related industries, including single-use containers and disposable, environmentally friendly packaging, owing to its renewability, compostability, biodegradability, and safety. Although PLA demonstrates reasonably good optical, physical, mechanical, and barrier properties, comparable to the existing petroleum-based plastics, its brittleness and mold shrinkage, as well as its price, are points of concern for the production of rigid and semi-rigid packaging. Blending PLA with other bio-based polymers, including thermoplastic starch (TPS), is an alternative, not only to achieve a completely bio-based plastic but also to reduce the brittleness, shrinkage during molding, and production cost of PLA-based products. TPS is a material produced mainly from starch, which is cheap, renewable, biodegradable, compostable, and non-toxic. It is commonly prepared by plasticization of starch under applied heat and shear force. Although glycerol has been reported as one of the most widely used plasticizers for preparing TPS, its migration causes surface stickiness of the TPS products. In some cases, mixed plasticizers or natural fibers have been applied to impede the retrogradation of starch or to reduce the migration of glycerol. The introduction of fibers into TPS-based materials can reinforce the polymer matrix as well. Therefore, the objective of the present research is to study the effect of starch type (i.e., native starch and phosphate starch), plasticizer type (i.e., glycerol and xylitol, with glycerol-to-xylitol weight ratios of 100:0, 75:25, 50:50, 25:75, and 0:100), and fiber content (in the range of 1-25 wt%) on properties of the PLA/TPS blend and composite. PLA/TPS blends and composites were prepared using a twin-screw extruder and then converted into dumbbell-shaped specimens using an injection molding machine. The PLA/TPS blends prepared using phosphate starch showed higher tensile strength and stiffness than the blends prepared using the native one. In contrast, the blends from native starch exhibited higher extensibility and heat distortion temperature (HDT) than those from the modified starch. Increasing the xylitol content resulted in enhanced tensile strength, stiffness, and water resistance, but decreased extensibility and HDT of the PLA/TPS blend. The tensile properties and hydrophobicity of the blend could be improved by incorporating silane-treated jute fibers.
Keywords: polylactic acid, thermoplastic starch, jute fiber, composite, blend
631 Achieving Net Zero Energy Building in a Hot Climate Using Integrated Photovoltaic and Parabolic Trough Collectors
Authors: Adel A. Ghoneim
Abstract:
In most existing buildings in hot climates, cooling loads lead to high primary energy consumption and, consequently, high CO₂ emissions. These can be substantially decreased with integrated renewable energy systems. Kuwait is characterized by a dry, hot, long summer and a short, warm winter. Kuwait receives annual total radiation of more than 5280 MJ/m², with approximately 3347 h of sunshine. A solar energy system consisting of PV modules and parabolic trough collectors is considered to satisfy the electricity consumption, domestic water heating, and cooling loads of an existing building. This paper presents the results of an extensive program of energy conservation and energy generation using integrated photovoltaic (PV) modules and parabolic trough collectors (PTC). The program was conducted on an existing institutional building with the intention of converting it into a Net Zero Energy Building (NZEB) or near Net Zero Energy Building (nNZEB). The program consists of two phases; the first phase is concerned with energy auditing and energy conservation measures at minimum cost, and the second phase considers the installation of photovoltaic modules and parabolic trough collectors. The 2-storey building under consideration is the Applied Sciences Department at the College of Technological Studies, Kuwait. Single-effect lithium bromide-water absorption chillers are implemented to provide the air conditioning load of the building. A numerical model is developed to evaluate the performance of parabolic trough collectors in the Kuwaiti climate. The transient simulation program TRNSYS is adapted to simulate the performance of the different solar system components. In addition, a numerical model is developed to assess the environmental impacts of building-integrated renewable energy systems. Results indicate that efficient energy conservation can play an important role in converting existing buildings into NZEBs, as it saves a significant portion of the annual energy consumption of the building. The first phase results in energy conservation of about 28% of the building consumption. In the second phase, the integrated PV completely covers the lighting and equipment loads of the building. On the other hand, parabolic trough collectors with an optimum area of 765 m² can satisfy a significant portion of the cooling load, i.e., about 73% of the total building cooling load. The annual avoided CO₂ emission is evaluated at the optimum conditions to assess the environmental impact of the renewable energy systems. The total annual avoided CO₂ emission is about 680 metric tons/year, which confirms the environmental benefit of these systems in Kuwait.
Keywords: building integrated renewable systems, Net-Zero energy building, solar fraction, avoided CO2 emission
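The avoided-emission figure can be sanity-checked as delivered renewable energy times a grid emission factor. In the sketch below, both numbers are assumed, illustrative values chosen only to show that the reported ~680 t/yr magnitude is plausible; they are not the values used in the study.

```python
# Avoided CO2 (t/yr) ~ displaced grid energy (MWh/yr) x grid emission factor (tCO2/MWh)

def avoided_co2_tons(annual_energy_mwh: float, grid_ef_t_per_mwh: float) -> float:
    return annual_energy_mwh * grid_ef_t_per_mwh

# Assumed illustrative numbers: if PV plus trough collectors displace ~1000 MWh/yr
# of grid electricity at an oil/gas-fired grid factor of ~0.68 tCO2/MWh, the
# avoided emission lands near the ~680 t/yr reported in the abstract.
print(avoided_co2_tons(1000.0, 0.68), "tCO2/yr")
```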
630 Hybrid Precoder Design Based on Iterative Hard Thresholding Algorithm for Millimeter Wave Multiple-Input-Multiple-Output Systems
Authors: Ameni Mejri, Moufida Hajjaj, Salem Hasnaoui, Ridha Bouallegue
Abstract:
Recent technology advances have made millimeter wave (mmWave) communication possible. Due to the huge amount of spectrum available in mmWave frequency bands, this promising candidate is considered a key technology for the deployment of 5G cellular networks. In order to enhance system capacity and achieve spectral efficiency, very large antenna arrays are employed in mmWave systems to exploit array gain. However, it has been shown that conventional beamforming strategies are not suitable for mmWave hardware implementation. Therefore, new features are required for mmWave cellular applications. Unlike traditional multiple-input-multiple-output (MIMO) systems, for which only digital precoders are needed to accomplish precoding, MIMO at mmWave is different because of digital precoding limitations: fully digital precoding requires a large number of radio frequency (RF) chains, each with its associated signal mixers and analog-to-digital converters. As RF chain cost and power consumption are high, another alternative is needed. Although the hybrid precoding architecture, based on a combination of a baseband precoder and an RF precoder, has been regarded as the best solution, the optimal design of hybrid precoders remains open. According to the mapping strategies from RF chains to the antenna elements, there are two main categories of hybrid precoding architecture. The partially-connected (sub-array) structure reduces hardware complexity by using a smaller number of phase shifters, at the cost of some beamforming gain. In this paper, we treat the hybrid precoder design in mmWave MIMO systems as a matrix factorization problem. Thus, we adopt the alternating minimization principle in order to solve the design problem. Further, we present our proposed algorithm for the partially-connected structure, which is based on the iterative hard thresholding method. Through simulation results, we show that our hybrid precoding algorithm provides significant performance gains over existing algorithms. We also show that the proposed approach significantly reduces the computational complexity. Furthermore, valuable design insights are provided when we use the proposed algorithm to compare, in simulation, the partially-connected and fully-connected hybrid precoding structures.
Keywords: alternating minimization, hybrid precoding, iterative hard thresholding, low-complexity, millimeter wave communication, partially-connected structure
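The matrix factorization view of hybrid precoding approximates the optimal digital precoder Fopt by a phase-only analog precoder FRF times a small baseband precoder FBB. The sketch below solves this by plain alternating minimization for the partially-connected structure, where FRF is block diagonal with unit-modulus entries; it is a generic illustration of the factorization idea, not the authors' iterative hard thresholding algorithm, and all dimensions and the random channel are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
Nt, Nrf, Ns = 32, 4, 2            # antennas, RF chains, data streams
M = Nt // Nrf                     # antennas per sub-array

# Stand-in for the optimal fully digital precoder: leading right singular
# vectors of a random narrowband channel.
H = (rng.standard_normal((8, Nt)) + 1j * rng.standard_normal((8, Nt))) / np.sqrt(2)
_, _, Vh = np.linalg.svd(H)
Fopt = Vh.conj().T[:, :Ns]

def altmin_partial(Fopt, iters=50):
    """Alternating minimization of ||Fopt - FRF @ FBB||_F with a
    block-diagonal, unit-modulus FRF (partially-connected structure)."""
    FBB = rng.standard_normal((Nrf, Ns)) + 1j * rng.standard_normal((Nrf, Ns))
    mask = np.zeros((Nt, Nrf), dtype=bool)
    for r in range(Nrf):
        mask[r * M:(r + 1) * M, r] = True     # sub-array r feeds antennas r*M..(r+1)*M-1
    for _ in range(iters):
        # Analog step: unit-modulus entries take the phase of Fopt @ FBB^H
        phases = np.angle(Fopt @ FBB.conj().T)
        FRF = np.where(mask, np.exp(1j * phases), 0.0)
        # Digital step: least-squares baseband precoder for fixed FRF
        FBB, *_ = np.linalg.lstsq(FRF, Fopt, rcond=None)
    return FRF, FBB

FRF, FBB = altmin_partial(Fopt)
err = np.linalg.norm(Fopt - FRF @ FBB) / np.linalg.norm(Fopt)
print(f"relative factorization error: {err:.3f}")
```

The residual factorization error is the price of the sub-array constraint; the fully-connected structure (mask all True) would drive it lower at the cost of Nt times Nrf phase shifters.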
629 Predicting Polyethylene Processing Properties Based on Reaction Conditions via a Coupled Kinetic, Stochastic and Rheological Modelling Approach
Authors: Kristina Pflug, Markus Busch
Abstract:
Being able to predict polymer properties and processing behavior from the applied operating reaction conditions is one of the key challenges in modern polymer reaction engineering. Especially for cost-intensive processes with high safety requirements, such as the high-pressure polymerization of low-density polyethylene (LDPE), the need for simulation-based process optimization and product design is high. A multi-scale modelling approach was set up and validated via a series of high-pressure mini-plant autoclave reactor experiments. The approach starts with numerical modelling of the complex reaction network of the LDPE polymerization, taking into consideration the actual reaction conditions. While this gives average product properties, the complex polymeric microstructure, including random short- and long-chain branching, is calculated via a hybrid Monte Carlo approach. Finally, the processing behavior of LDPE, its melt flow behavior, is determined as a function of the previously determined polymeric microstructure using the branch-on-branch algorithm for randomly branched polymer systems. All three steps of the multi-scale modelling approach can be independently validated against analytical data. A triple-detector GPC containing an IR, a viscometry and a multi-angle light scattering detector is applied. It serves to determine molecular weight distributions as well as chain-length-dependent short- and long-chain branching frequencies. 13C-NMR measurements give average branching frequencies, and rheological measurements in shear and extension serve to characterize the polymeric flow behavior. The agreement between experimental and modelled results was found to be extraordinary, especially considering that the applied multi-scale modelling approach does not involve fitting parameters to the data. This validates the suggested approach and proves its universality at the same time. In the next step, the modelling approach can be applied to other reactor types, such as tubular reactors, or to industrial scale. Moreover, sensitivity analysis for systematically varied process conditions is easily feasible. The developed multi-scale modelling approach finally gives the opportunity to predict and design LDPE processing behavior simply based on process conditions such as feed streams and inlet temperatures and pressures.
Keywords: low-density polyethylene, multi-scale modelling, polymer properties, reaction engineering, rheology
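The stochastic microstructure step amounts to sampling chains from the kinetically predicted distributions: chain lengths from a most-probable distribution and branch counts from Poisson statistics at the predicted branching frequencies. A stripped-down Monte Carlo sketch with assumed, illustrative parameters (not the kinetic model's actual outputs) is shown below.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed illustrative outputs of the kinetic step:
mean_dp = 2000           # number-average degree of polymerization
lcb_per_1000C = 2.0      # long-chain branches per 1000 backbone carbons
scb_per_1000C = 20.0     # short-chain branches per 1000 backbone carbons

n_chains = 100_000
# Flory most-probable chain length distribution ~ geometric
dp = rng.geometric(1.0 / mean_dp, size=n_chains)
carbons = 2 * dp                                       # 2 backbone C per ethylene unit
lcb = rng.poisson(lcb_per_1000C * carbons / 1000.0)    # branch counts given chain length
scb = rng.poisson(scb_per_1000C * carbons / 1000.0)

mn = dp.mean() * 28.05                                 # g/mol, ethylene repeat unit
mw = (dp.astype(float)**2).mean() / dp.mean() * 28.05  # weight-average
print(f"Mn ~ {mn:,.0f} g/mol, Mw ~ {mw:,.0f} g/mol, PDI ~ {mw / mn:.2f}")
print(f"avg LCB/chain = {lcb.mean():.2f}, avg SCB/chain = {scb.mean():.1f}")
```

An ensemble of chains sampled this way, with its per-chain branch topology, is exactly the input the branch-on-branch rheology algorithm consumes in the third step.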
628 A First-Principles Investigation of Magnesium-Hydrogen System: From Bulk to Nano
Authors: Paramita Banerjee, K. R. S. Chandrakumar, G. P. Das
Abstract:
Bulk MgH₂ has drawn much attention for the purpose of hydrogen storage because of its high hydrogen storage capacity (~7.7 wt%) as well as its low cost and abundant availability. However, its practical usage has been hindered by its high hydrogen desorption enthalpy (~0.8 eV/H₂ molecule), which results in an undesirable desorption temperature of 300°C at 1 bar H₂ pressure. To surmount the limitations of bulk MgH₂ for the purpose of hydrogen storage, a detailed first-principles density functional theory (DFT) based study on the structure and stability of neutral (Mgₘ) and positively charged (Mgₘ⁺) Mg nanoclusters of different sizes (m = 2, 4, 8 and 12), as well as their interaction with molecular hydrogen (H₂), is reported here. It has been found that, due to the absence of d-electrons in the Mg atoms, hydrogen remains in molecular form even after its interaction with neutral and charged Mg nanoclusters. Interestingly, the H₂ molecules do not enter the interstitial positions of the nanoclusters. Rather, they remain on the surface, ornamenting these nanoclusters and forming new structures with a gravimetric density higher than 15 wt%. Our observation is that the inclusion of Grimme's DFT-D3 dispersion correction in this weakly interacting system has a significant effect on the binding of the H₂ molecules to these nanoclusters. The dispersion-corrected interaction energy (IE) values (0.1-0.14 eV/H₂ molecule) fall in the right energy window that is ideal for hydrogen storage. These IE values are further verified using high-level coupled-cluster calculations with non-iterative triples corrections, i.e., CCSD(T) (considered a highly accurate quantum chemical method), thereby confirming the accuracy of our dispersion-corrected DFT calculations. The significance of the polarization and dispersion energies in the binding of the H₂ molecules is confirmed by energy decomposition analysis (EDA). A total of 16, 24, 32 and 36 H₂ molecules can be attached to the neutral and charged nanoclusters of size m = 2, 4, 8 and 12, respectively. Ab initio molecular dynamics (AIMD) simulation shows that the outermost H₂ molecules are desorbed at a rather low temperature, viz. 150 K (-123°C), which is expected. However, complete dehydrogenation of these nanoclusters occurs at around 100°C. Most importantly, the host nanoclusters remain stable up to ~500 K (227°C). All these results on the adsorption and desorption of molecular hydrogen on neutral and charged Mg nanocluster systems point to the possibility of reducing the dehydrogenation temperature of bulk MgH₂ by designing new Mg-based nanomaterials that adsorb molecular hydrogen via this weak Mg-H₂ interaction, rather than the strong Mg-H bonding. Notwithstanding the fact that in practical applications these interactions will be further complicated by the effect of substrates as well as interactions with other clusters, the present study has implications for our fundamental understanding of this problem.
Keywords: density functional theory, DFT, hydrogen storage, molecular dynamics, molecular hydrogen adsorption, nanoclusters, physisorption
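The connection between the ~0.8 eV/H₂ desorption enthalpy and the ~300°C desorption temperature at 1 bar is the van't Hoff relation, which at an equilibrium pressure of 1 bar reduces to T = ΔH/ΔS. Assuming the textbook entropy of hydrogen release, ΔS ≈ 130 J mol⁻¹ K⁻¹ (an assumed standard value, not taken from the paper), reproduces the quoted figure:

```python
# Van't Hoff: ln(p/p0) = -dH/(R*T) + dS/R  =>  T(1 bar) = dH/dS
EV_TO_J_PER_MOL = 96_485.0   # 1 eV per molecule ~ 96.485 kJ/mol

def desorption_temperature_K(dH_eV: float, dS: float = 130.0) -> float:
    """Equilibrium temperature at 1 bar; dS ~ 130 J/(mol K) is the standard
    entropy change assumed for metal hydride -> metal + H2(g)."""
    return dH_eV * EV_TO_J_PER_MOL / dS

T = desorption_temperature_K(0.8)
print(f"T(1 bar) ~ {T:.0f} K = {T - 273.15:.0f} C")   # ~321 C, close to the ~300 C quoted
```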
627 Investigations of Effective Marketing Metric Strategies: The Case of St. George Brewery Factory, Ethiopia
Authors: Mekdes Getu Chekol, Biniam Tedros Kahsay, Rahwa Berihu Haile
Abstract:
The main objective of this study is to investigate marketing strategy practice in the case of the St. George Brewery Factory in Addis Ababa. One of the core requirements for a company to stay in business is a well-developed marketing strategy. The study assessed how marketing strategies were practiced in the company to achieve its goals, in alignment with segmentation, target market, positioning, and the marketing mix elements, so as to satisfy customer requirements. Using primary and secondary data, the study was conducted with both qualitative and quantitative approaches. The primary data were collected through open- and closed-ended questionnaires. Given the small size of the population, respondents were selected by census. The findings show that the company used all four Ps of the marketing mix in its marketing strategies and provided quality products at affordable prices, promoting its products through effective, high-reach advertising. Product availability and accessibility are admirable, with both direct and indirect distribution channels in use. The company has identified its target customers, and its market segmentation practice is based on geographical location. Communication between the marketing department and other departments is very effective. The adjusted R² indicates that product, price, promotion, and place explain 61.6% of the variance in marketing strategy practice; the remaining 38.4% of the variation in the dependent variable is explained by factors not included in this study. The results reveal that all four independent variables, product, price, promotion, and place, have positive beta coefficients, indicating that the predictor variables have a positive effect on the predicted dependent variable, marketing strategy practice. Although the company's marketing strategies are effectively practiced, the company faces some problems while implementing them: infrastructure problems, economic problems, intense competition in the market, shortage of raw materials, seasonality of consumption, socio-cultural problems, and the time and cost of creating customer awareness. Finally, the authors suggest that the company develop a longer-range view and implement a more structured approach to gathering information about potential customers, competitors' actions, and market intelligence within the industry. In addition, we recommend extending the study by increasing the sample size and including additional marketing factors.
Keywords: marketing strategy, market segmentation, target marketing, market positioning, marketing mix
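The reported variance decomposition comes from an ordinary multiple regression of the marketing-strategy score on the four Ps. A generic sketch with synthetic data shows how R² and adjusted R² figures of this kind are computed; the respondent count, scores, and coefficients below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 80                                    # hypothetical number of respondents

# Synthetic Likert-style predictor scores: product, price, promotion, place
X = rng.uniform(1, 5, size=(n, 4))
y = 0.5 + X @ np.array([0.4, 0.3, 0.25, 0.2]) + rng.normal(0, 0.5, n)

Xd = np.column_stack([np.ones(n), X])     # add intercept column
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)

resid = y - Xd @ beta
ss_res = resid @ resid
ss_tot = ((y - y.mean())**2).sum()
r2 = 1 - ss_res / ss_tot
k = X.shape[1]
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
print(f"betas = {np.round(beta, 3)}, R2 = {r2:.3f}, adjusted R2 = {adj_r2:.3f}")
```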
626 An Analysis of LoRa Networks for Rainforest Monitoring
Authors: Rafael Castilho Carvalho, Edjair de Souza Mota
Abstract:
As the largest contributor to the biogeochemical functioning of the Earth system, the Amazon rainforest has the greatest biodiversity on the planet, harboring about 15% of the world's flora. Recognition and preservation are the focus of research that seeks to mitigate drastic changes, especially anthropic ones, which irreversibly affect this biome. Functional and low-cost monitoring alternatives to reduce these impacts are a priority, such as those using technologies like Low Power Wide Area Networks (LPWAN). Promising, reliable, secure, and with low energy consumption, LPWAN can connect thousands of IoT devices, and LoRa in particular is considered one of the most successful solutions for forest monitoring applications. Despite this, the forest environment, and the Amazon rainforest in particular, is challenging for these technologies, requiring work to identify and validate the use of the technology in a real environment. To investigate the feasibility of deploying LPWAN for remote water quality monitoring of rivers in the Amazon region, a LoRa-based test bed was set up, consisting of a LoRa transmitter and a LoRa receiver, both implemented with Arduino boards and the SX1276 LoRa chip. The experiment was carried out at the Federal University of Amazonas, which contains one of the largest urban forests in Brazil. There are several springs inside the forest, and the main goal is to collect water quality parameters and transmit the data through the forest in real time to the gateway at the university. In all, there are nine water quality parameters of interest. Even with a high collection frequency, the amount of information that must be sent to the gateway is small. However, for this application, the battery of the transmitter device is a concern, since in the real application the device must run without maintenance for long periods of time. With these constraints in mind, parameters such as the spreading factor (SF) and coding rate (CR), different antenna heights, and distances were tuned to improve the connectivity quality, measured via RSSI and loss rate. A handheld RF Explorer spectrum analyzer was used to obtain the RSSI values. Distances exceeding 200 m soon proved difficult for establishing communication, due to the dense foliage and high humidity. The optimal SF-CR combinations were 8-5 and 9-5, showing the lowest packet loss rates, 5% and 17% respectively, at a signal strength of approximately -120 dBm; these are the best settings for this study so far. Rain and changing climate conditions imposed limitations on the equipment, and more tests are already being conducted. Subsequently, the range of the LoRa configuration must be extended using a mesh topology, especially because at least three different collection points in the same water body are required.
Keywords: IoT, LPWAN, LoRa, coverage, loss rate, forest
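The SF/CR settings the authors tune directly determine time on air, and hence energy per packet and collision probability. The sketch below applies the Semtech SX1276 time-on-air formula to the two best-performing settings (SF8 and SF9, both at coding rate 4/5); the 20-byte payload and 125 kHz bandwidth are assumptions, not values stated in the abstract.

```python
import math

def lora_time_on_air_ms(payload_bytes, sf, bw_hz=125_000, cr=1,
                        preamble=8, crc=True, implicit_header=False):
    """Semtech SX1276 time-on-air formula; cr=1 means coding rate 4/5."""
    t_sym = (2**sf) / bw_hz                      # symbol duration, s
    de = 1 if t_sym > 0.016 else 0               # low data rate optimization flag
    ih = 1 if implicit_header else 0
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    t_preamble = (preamble + 4.25) * t_sym
    return 1000 * (t_preamble + n_payload * t_sym)

for sf in (8, 9):   # the two best-performing settings, both at CR 4/5
    print(f"SF{sf}: {lora_time_on_air_ms(20, sf):.1f} ms for a 20-byte payload")
```

Moving from SF8 to SF9 roughly doubles the time on air, which is the energy cost of the extra link margin; that trade-off is exactly what the loss-rate measurements arbitrate.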
625 Damping Optimal Design of Sandwich Beams Partially Covered with Damping Patches
Authors: Guerich Mohamed, Assaf Samir
Abstract:
The application of viscoelastic materials in the form of constrained layers in mechanical structures is an efficient and cost-effective technique for solving noise and vibration problems. This technique requires a design tool to select the best location, type, and thickness of the damping treatment. This paper presents a finite element model for the vibration of beams partially or fully covered with a constrained viscoelastic damping material. The model is based on Bernoulli-Euler theory for the faces and Timoshenko beam theory for the core. It uses four variables: the through-thickness constant deflection, the axial displacements of the two faces, and the bending rotation of the beam. The sandwich beam finite element is compatible with the conventional C1 finite element for homogeneous beams. To validate the proposed model, several free vibration analyses of fully or partially covered beams, with different locations of the damping patches and different percent coverage, are studied. The results show that the proposed approach can be used as an effective tool to study the influence of the location and treatment size on the natural frequencies and the associated modal loss factors. Then, a parametric study of the damping characteristics of partially covered beams is conducted, considering the core shear modulus, the patch size, the thicknesses of the constraining layer and the core, and the locations of the patches. In partial coverage, the spatial distribution of the added viscoelastic damping is as important as the thickness and material properties of the viscoelastic layer and the constraining layer. Indeed, to limit added mass and to attain maximum damping, the damping patches should be placed at optimum locations. These locations are often selected using the modal strain energy indicator: the damping patches are applied over the regions of the base structure with the highest modal strain energy, to target specific modes of vibration. In the present study, a more efficient indicator is proposed, which consists of placing the damping patches over the regions of highest energy dissipation through the viscoelastic layer of the fully covered sandwich beam. The presented approach is used in an optimization method to select the best location for the damping patches, as well as the layer thicknesses and material properties that yield optimal damping with the minimum area of coverage.
Keywords: finite element model, damping treatment, viscoelastic materials, sandwich beam
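Both placement indicators mentioned above reduce to elementwise energy ratios: the classical modal strain energy (MSE) method estimates the modal loss factor as the material loss factor times the fraction of modal strain energy stored in the viscoelastic core, and patches go where that elementwise contribution is largest. A schematic sketch with assumed element energies (hypothetical numbers, not finite element output) is shown below.

```python
import numpy as np

eta_v = 0.5   # assumed viscoelastic material loss factor

# Hypothetical modal strain energy per element, for one mode of interest:
core_energy  = np.array([0.2, 1.5, 3.1, 2.8, 1.1, 0.4, 0.1, 0.05])  # core layer only
total_energy = np.array([1.0, 3.0, 5.0, 4.5, 2.5, 1.5, 1.0, 0.80])  # whole element

# MSE estimate of the modal loss factor for full coverage:
# eta_mode = eta_v * (core strain energy) / (total strain energy)
eta_mode = eta_v * core_energy.sum() / total_energy.sum()
print(f"estimated modal loss factor (full coverage): {eta_mode:.3f}")

# Rank elements by the energy stored (hence dissipated) in the core:
ranking = np.argsort(core_energy)[::-1]
print("place patches over elements (best first):", ranking[:4])
```

The paper's refinement is to rank by the energy actually dissipated through the viscoelastic layer of the fully covered beam rather than by the base structure's strain energy alone.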
Procedia PDF Downloads 149624 Electrodeposition of Silicon Nanoparticles Using Ionic Liquid for Energy Storage Application
Authors: Anjali Vanpariya, Priyanka Marathey, Sakshum Khanna, Roma Patel, Indrajit Mukhopadhyay
Abstract:
Silicon (Si) is a promising negative electrode material for lithium-ion batteries (LiBs) due to its low cost, non-toxicity, and high theoretical capacity of 4200 mAh g⁻¹. The primary challenge in applying Si-based LiBs is the large volume expansion (~300%) during the charge-discharge process. Incorporation of graphene and carbon nanotubes (CNTs), morphological control, and the use of nanoparticles have been employed as strategies to tackle the volume expansion issue. Molten salt methods can also address it, but their high-temperature requirement limits their application; for a sustainable and practical approach, room temperature (RT) methods are essential. The use of ionic liquids (ILs) for the electrodeposition of Si nanostructures can resolve the temperature issue while also providing a greener medium. In this work, electrodeposition of Si nanoparticles on a gold substrate was successfully carried out at room temperature in the IL medium 1-butyl-3-methylimidazolium-bis(trifluoromethylsulfonyl)imide (BMImTf₂N). Cyclic voltammetry (CV) suggests the sequential reduction of Si⁴⁺ to Si²⁺ and then to Si nanoparticles (SiNs). The structure and morphology of the electrodeposited SiNs were investigated by FE-SEM, which showed interconnected Si nanoparticles with an average particle size of ~100-200 nm. XRD and XPS data confirm the deposition of Si on Au (111). The first discharge and charge capacities of the Si anode material were found to be 1857 and 422 mAh g⁻¹, respectively, at a current density of 7.8 A g⁻¹. The irreversible capacity of the first discharge-charge process can be attributed to solid electrolyte interface (SEI) formation via electrolyte decomposition and to Li⁺ trapped in the inner pores of Si. Pulverization of SiNs creates new active sites, which facilitate the formation of new SEI in subsequent cycles, leading to fading of the specific capacity. After 20 cycles, the charge-discharge profiles stabilized, and a reversible capacity of 150 mAh g⁻¹ was retained. Electrochemical impedance spectroscopy (EIS) shows a decrease in the Rct value from 94.7 to 47.6 kΩ after 50 charge-discharge cycles, demonstrating improved interfacial charge transfer kinetics. The decrease in the Warburg impedance after 50 cycles indicates facile diffusion in the fragmented, smaller Si nanoparticles. In summary, Si nanoparticles were deposited on a gold substrate using an IL medium and well characterized with different analytical techniques. The synthesized material was successfully utilized for LiB application, as supported by the CV and EIS data.Keywords: silicon nanoparticles, ionic liquid, electrodeposition, cyclic voltammetry, Li-ion battery
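The cycling figures quoted above translate directly into the usual first-cycle metrics. The short sketch below computes them from the capacities given in the abstract, using the common anode convention that coulombic efficiency is the charge (delithiation) capacity over the discharge (lithiation) capacity.

```python
def coulombic_efficiency(charge_mAh_g, discharge_mAh_g):
    """First-cycle coulombic efficiency, in percent (anode convention)."""
    return 100.0 * charge_mAh_g / discharge_mAh_g

first_discharge, first_charge = 1857.0, 422.0   # mAh/g, from the abstract
cap_after_20 = 150.0                            # mAh/g, from the abstract

print(f"1st-cycle CE: {coulombic_efficiency(first_charge, first_discharge):.1f}%")
print(f"Irreversible 1st-cycle loss: {first_discharge - first_charge:.0f} mAh/g")
print(f"Retention after 20 cycles: {100 * cap_after_20 / first_charge:.0f}% "
      "of the first reversible (charge) capacity")
```

These values (about 23% first-cycle efficiency, 36% retention) quantify the SEI formation and pulverization losses the abstract describes.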
Procedia PDF Downloads 125623 Optimization of MAG Welding Process Parameters Using Taguchi Design Method on Dead Mild Steel
Authors: Tadele Tesfaw, Ajit Pal Singh, Abebaw Mekonnen Gezahegn
Abstract:
Welding is a basic manufacturing process for making components or assemblies. Recent welding economics research has focused on developing reliable machinery databases to ensure optimum production. Research on the welding of materials like steel is still critical and ongoing. Welding input parameters play a very significant role in determining the quality of a weld joint. The metal active gas (MAG) welding parameters are among the most important factors affecting the quality, productivity, and cost of welding in many industrial operations. The aim of this study is to optimize the process parameters of metal active gas welding of a 60×60×5 mm dead mild steel plate workpiece, using the Taguchi method to formulate the statistical experimental design for a semi-automatic welding machine. An experimental study was conducted at Bishoftu Automotive Industry, Bishoftu, Ethiopia. This study examines the influence of four welding parameters (control factors), welding voltage (V), welding current (A), wire speed (m/min), and gas (CO2) flow rate (L/min), each at three levels, on weld hardness variability. The objective function was chosen in relation to the MAG welding parameters, i.e., the weld hardness of the final products. Nine experimental runs based on an L9 orthogonal array of the Taguchi method were performed. The orthogonal array, the signal-to-noise (S/N) ratio, and analysis of variance (ANOVA) were employed to investigate the welding characteristics of the dead mild steel plate and to obtain the optimum level of every input parameter at the 95% confidence level. The optimal parameter setting was found to be a welding voltage of 22 V, a welding current of 125 A, a wire speed of 2.15 m/min, and a gas flow rate of 19 L/min, within the constraints of the production process. Finally, six confirmation welds were carried out; the agreement of the predicted values with the experimental values confirms the effectiveness of the analysis of weld hardness (quality) in the final products. Welding current was found to have the greatest influence on the quality of the welded joints. The experimental result at the optimum setting gave better weld hardness than the initial setting. This study is valuable to Ethiopian industries for welding plates of different materials and thicknesses.Keywords: weld quality, metal active gas welding, dead mild steel plate, orthogonal array, analysis of variance, Taguchi method
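The S/N computation behind such a study is compact enough to sketch. For hardness, a larger-the-better ratio is the natural choice; below, a standard L9(3⁴) array is evaluated with hypothetical hardness values (the run data are not given in the abstract), and the best level of each factor is the one with the highest mean S/N.

```python
import numpy as np

# L9(3^4) orthogonal array: levels (1..3) for voltage, current, wire speed, gas flow.
L9 = np.array([[1,1,1,1],[1,2,2,2],[1,3,3,3],
               [2,1,2,3],[2,2,3,1],[2,3,1,2],
               [3,1,3,2],[3,2,1,3],[3,3,2,1]])

# Hypothetical hardness results for the nine runs (placeholder data).
y = np.array([58., 61., 60., 64., 66., 63., 62., 65., 64.])

sn = -10 * np.log10(1.0 / y**2)   # larger-the-better S/N, one replicate per run
for f, name in enumerate(["voltage", "current", "wire speed", "gas flow"]):
    means = [sn[L9[:, f] == lvl].mean() for lvl in (1, 2, 3)]
    best = int(np.argmax(means)) + 1
    print(f"{name}: level means {np.round(means, 2)} -> best level {best}")
```

ANOVA on the same level means would then apportion each factor's contribution to the variance, which is how welding current is identified as the dominant factor.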
Procedia PDF Downloads 482622 Impact of Reproductive Technologies on Women's Lives in New Delhi: A Study from Feminist Perspective
Authors: Zairunisha
Abstract:
This paper is concerned with the ways in which Assisted Reproductive Technologies (ARTs) affect women's lives and their perceptions regarding infertility, contraception, and reproductive health. Like other female animals, the human female is endowed by nature with the biological potential of procreation and motherhood. During the last few decades, however, this phenomenal disposition of women has become a technological affair directed at achieving fertility and contraception. Medical practices in patriarchal societies are governed by male scientists and technical and medical professionals who try to control women as procreators instead of providing them with choices. The use of ARTs raises innumerable vexed ethical questions and issues, such as: the place and role of a child in a woman's life; the freedom of women to make their own choices about the use of ARTs; the challenges and complexities women face at social and personal levels regarding the use of ARTs; and the effect of ARTs on their lives as mothers and on other relationships. The paper is based on a survey study designed to explore and analyze the above ethical issues arising from the use of ARTs by women in New Delhi, the capital of India. A rapid increase in the number of fertility clinics has been noticed recently. These clinics claim to serve women by offering ART procedures to infertile couples and individuals who want to have a child or terminate a pregnancy. The study is an attempt to articulate a critique of ARTs from a feminist perspective. A qualitative feminist research methodology was adopted for the survey study. An attempt has been made to identify the ways in which a woman's life is affected in terms of her perceptions, apprehensions, choices, and decisions regarding new reproductive technologies. A sample of 18 women in New Delhi was taken for in-depth interviews investigating their perception of and response to the use of ARTs, with a focus on (i) successful use of ARTs, (ii) unsuccessful use of ARTs, and (iii) use of ARTs in progress, with results yet to be known. The survey investigated the impact of ARTs on women's physical, emotional, and psychological conditions as well as on their social relations and choices. The complexities and challenges faced by women in the voluntary and involuntary (forced) use of ARTs in Delhi are illustrated. A critical analysis of the interviews revealed that these technologies are used and developed for profit at the cost of women's lives, with economically privileged women and individuals able to purchase services from less privileged ones. In this way, the amalgamation of technology and cultural traditions is redefining and re-conceptualising the traditional patterns of motherhood, fatherhood, kinship, and family relations within the new ways of reproduction introduced through ARTs.Keywords: reproductive technologies, infertilities, voluntary, involuntary
Procedia PDF Downloads 373621 A Mixed-Method Study Exploring Expressive Writing as a Brief Intervention Targeting Mental Health and Wellbeing in Higher Education Students: A Focus on the Quantitative Findings
Authors: Gemma Reynolds, Deborah Bailey Rodriguez, Maria Paula Valdivieso Rueda
Abstract:
In recent years, the mental health of Higher Education (HE) students has been a growing concern, further exacerbated by the stresses associated with the Covid-19 pandemic, which place students at even greater risk of developing mental health issues. Support available to students in HE tends to follow an established and traditional route. Demand for counselling services has grown, not only with the increase in student numbers but also with the number of students seeking support for mental health issues. One way of improving well-being and mental health in HE students is through brief interventions such as expressive writing (EW). This intervention involves encouraging individuals to write continuously for at least 15-20 minutes over three to five sessions (often on consecutive days) about their deepest thoughts and feelings, to explore significant personal experiences in a meaningful way. Given its brevity, simplicity, and cost-effectiveness, EW has considerable potential for HE populations. The current study therefore employed a mixed-methods design to explore the effectiveness of EW in reducing anxiety, general stress, academic stress, and depression in HE students while improving well-being. HE students at MDX were randomly assigned to one of three conditions: (1) the UniExp-EW group wrote about their emotions and thoughts concerning any stressors they have faced that are directly relevant to their university experience; (2) the NonUniExp-EW group wrote about their emotions and thoughts concerning any stressors that are NOT directly relevant to their university experience; and (3) the Control group wrote about how they spent their weekend, with no reference to thoughts or emotions and without thinking about university. Participants carried out the EW intervention for 15 minutes per day for four consecutive days. Baseline mental health and well-being measures were taken before the intervention via a battery of standardised questionnaires. Following completion of the intervention on day four, participants completed the questionnaires a second time, and again one week later. Participants were also invited to attend focus groups to discuss their experience of the intervention, allowing an in-depth investigation of students' perceptions of EW and of whether they would choose to use this intervention in the future. The quantitative findings, together with their important implications, will be discussed at the conference. The study is fundamental because, if EW is an effective intervention for improving mental health and well-being in HE students, its brevity and simplicity mean it can be easily implemented and made freely available to students. Improving the mental health and well-being of HE students can have knock-on benefits for academic skills and career development.Keywords: mental health, wellbeing, higher education students, expressive writing
Procedia PDF Downloads 88620 3D-Printing of Waveguide Terminations: Effect of Material Shape and Structuring on Their Characteristics
Authors: Lana Damaj, Vincent Laur, Azar Maalouf, Alexis Chevalier
Abstract:
Matched terminations are an important class of passive waveguide components, typically used at the end of a waveguide transmission line to prevent reflections and improve signal quality. Waveguide terminations (loads) are commonly used in microwave and RF applications. In traditional microwave architectures, a waveguide termination usually consists of a standard rectangular waveguide made of a lossy resistive material and ended by a shorting metallic plate. Terminations of this type dissipate the incident energy as heat; however, they may increase the size and weight of the overall system. A new alternative solution consists in developing terminations based on the 3D-printing of materials. Designing such terminations is very challenging, since they should meet the requirements imposed by the system. These requirements include many parameters, such as absorption and power handling capability, in addition to the cost, size, and weight that have to be minimized. 3D-printing is a shaping process that enables the production of complex geometries, allowing the best compromise between requirements to be found. In this paper, a comparison study has been made between different existing and new shapes of waveguide terminations. Indeed, 3D-printing of absorbers makes it possible to study not only standard shapes (wedge, pyramid, tongue) but also more complex topologies such as exponential ones. These shapes have been designed and simulated using CST MWS®. The loads have been printed using carbon-filled polylactic acid (conductive PLA) from ProtoPasta. Since the terminations have been characterized in the X-band (8 GHz to 12 GHz), the rectangular waveguide standard WR-90 has been selected. The classical wedge shape has been used as a reference. First, all loads have been simulated with the same length, and two parameters have been compared: the absorption level (level of |S11|) and the dissipated power density. This study shows that the concave exponential pyramidal shape has the best absorption level, while the convex exponential pyramidal shape has the best dissipated power density level. These two loads have been printed in order to measure their properties, and good agreement between the simulated and measured reflection coefficients has been obtained. Furthermore, material structuring based on a honeycomb hexagonal structure has been investigated in order to vary the effective properties. In the final paper, the detailed methodology and the simulated and measured results will be presented, to show how 3D-printing allows mass, weight, absorption level, and power behaviour to be controlled.Keywords: additive manufacturing, electromagnetic composite materials, microwave measurements, passive components, power handling capacity (PHC), 3D-printing
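For a one-port load, the "absorption level" reported via |S11| maps directly to the fraction of incident power dissipated, since whatever a termination does not reflect it must absorb. A small sketch of the conversion:

```python
def absorbed_fraction(s11_db):
    """For a one-port termination, power not reflected is absorbed."""
    gamma_sq = 10 ** (s11_db / 10)   # |S11|^2 recovered from its dB value
    return 1.0 - gamma_sq

for s11 in (-10, -20, -30):
    print(f"|S11| = {s11} dB -> {100 * absorbed_fraction(s11):.2f}% absorbed")
```

At the -20 dB level often taken as a matching target, 99% of the incident power is already dissipated in the load; improving |S11| further mainly reduces the residual standing wave seen by the rest of the system.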
Procedia PDF Downloads 22619 Cessna Citation X Business Aircraft Stability Analysis Using Linear Fractional Representation LFRs Model
Authors: Yamina Boughari, Ruxandra Mihaela Botez, Florian Theel, Georges Ghazi
Abstract:
Clearance of the flight control laws of a civil aircraft is a long and expensive process in the aerospace industry. Thousands of flight combinations in terms of speeds, altitudes, gross weights, centers of gravity, and angles of attack have to be investigated and proved to be safe. Nonetheless, in this method a worst-case flight condition can easily be missed, and missing it could lead to a critical situation. An exhaustive analysis of a model is impossible because of the infinite number of cases contained within its flight envelope, and attempting it would require more time and therefore more design cost. Therefore, in industry, the flight envelope mesh technique is commonly used: for each point of the flight envelope, simulation of the associated model shows whether or not the specifications are satisfied. In order to perform fast, comprehensive, and effective analysis, parameter-varying models were developed by incorporating variations, or uncertainties, into the nominal models; these are known as Linear Fractional Representation (LFR) models, and they can describe the aircraft dynamics while taking uncertainties over the flight envelope into account. In this paper, the LFR models are developed using speed and altitude as the varying parameters, and were built from several flight conditions expressed in terms of speeds and altitudes. This method has gained great interest among aeronautical companies, which see a promising future for it in modeling, and particularly in the design and certification of control laws. This research paper focuses on the open-loop stability analysis of the Cessna Citation X. The data are provided by a Level D Research Aircraft Flight Simulator, corresponding to the highest level of flight dynamics certification; this simulator was developed by CAE Inc., and its development was based on the requirements of research at the LARCASE laboratory. These data were used to develop a linear model of the airplane in its longitudinal and lateral motions, and further to create the LFR models for 12 XCG/weight conditions, and thus the whole flight envelope, using a user-friendly Graphical User Interface developed during this study. The LFR models are then analyzed using an interval analysis method based upon Lyapunov functions, as well as the 'stability and robustness analysis' toolbox. The results were presented as graphs, offering good readability and easy exploitation. The weakness of this method lies in its relatively long computation time, about four hours for the entire flight envelope.Keywords: flight control clearance, LFR, stability analysis, robustness analysis
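For contrast with the LFR approach, the flight envelope mesh technique mentioned above reduces to gridding (speed, altitude) and checking the eigenvalues of each linearized model. A minimal sketch follows; the A(V, h) here is a hypothetical two-state short-period-like model, not the Citation X data, which come from the Level D simulator.

```python
import numpy as np

def is_stable(A):
    """Open-loop stability: all eigenvalues strictly in the left half-plane."""
    return np.all(np.linalg.eigvals(A).real < 0)

def sweep_envelope(A_of, speeds, altitudes):
    """Grid the (speed, altitude) envelope and flag unstable points."""
    return [(v, h) for v in speeds for h in altitudes
            if not is_stable(A_of(v, h))]

# Placeholder short-period-like model whose damping degrades with altitude;
# a real A(V, h) would be identified from flight simulator data.
def A_of(v, h):
    zeta = 0.4 - 1.5e-5 * h          # hypothetical damping trend
    wn = 0.02 * v                    # hypothetical natural-frequency trend
    return np.array([[0.0, 1.0], [-wn**2, -2 * zeta * wn]])

print(sweep_envelope(A_of, speeds=range(120, 280, 40),
                     altitudes=range(0, 41000, 10000)))
```

The cost of such sweeps, and the risk of missing a worst case between grid points, is precisely what motivates the guaranteed, parameter-dependent LFR/Lyapunov analysis described in the abstract.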
Procedia PDF Downloads 352618 Ground Improvement Using Deep Vibro Techniques at Madhepura E-Loco Project
Authors: A. Sekhar, N. Ramakrishna Raju
Abstract:
This paper presents ground improvement using deep vibro techniques with a combination of sand and stone columns, performed at a site highly susceptible to liquefaction (70 to 80% sand strata, the balance silt) with low bearing capacity due to high settlements, located in earthquake zone V (as per the IS code) at Madhepura, Bihar, in the northern part of India. Initially, bored cast-in-situ/precast piles and stone/sand columns were envisaged. However, after detailed analysis aimed at addressing liquefaction and improving bearing capacity simultaneously, deep vibro techniques with a combination of sand and stone columns were identified as an excellent solution for the given site conditions, possibly for the first time in India. First, after a detailed soil investigation, a pre-treatment eCPT test was conducted to evaluate the potential depth of liquefaction, so that the silty sandy soils could be densified to improve the factor of safety against liquefaction. Trial tests were then carried out at the site using the deep vibro compaction technique with combinations of sand and stone columns at different spacings in a triangular pattern, and with different timings during each lift of the vibro probe up to ground level. Different spacings and timings were tried in order to find the most effective combination for achieving maximum, uniform densification of the saturated loose silty sandy soils over the whole treated area. Post-treatment eCPT tests and plate load tests were then conducted at all trial locations for the different spacings and timings of the sand and stone columns, to identify the configuration giving the required factor of safety against liquefaction and the desired bearing capacities with reduced settlements for the construction of industrial structures. A review of these results showed that the ground layers were densified more than expected, with an improved factor of safety against liquefaction and good bearing capacities at the given settlements as per IS code provisions. For lightly loaded single-storey structures, a cost-effective variant using the deep vibro technique with sand columns alone, avoiding stone, was also worked out, and the results were satisfactory for supporting lightly loaded foundations. The key feature of this technique is the simultaneous mitigation of liquefaction, improvement of bearing capacities, and reduction of settlements to acceptable limits as per IS: 1904-1986, up to a depth of 19 m. To the best of our knowledge, this was executed for the first time in India.Keywords: ground improvement, deep vibro techniques, liquefaction, bearing capacity, settlement
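The factor of safety against liquefaction that the pre- and post-treatment eCPT soundings feed into is conventionally the Seed-Idriss simplified ratio FS = CRR/CSR. A sketch of the calculation is given below; the peak ground acceleration (taken as the IS 1893 zone V factor of 0.36 g), the stresses, and the before/after CRR values are illustrative assumptions, not the project's data.

```python
def rd(z):
    """Stress reduction factor (Liao & Whitman), valid to ~23 m depth."""
    return 1.0 - 0.00765 * z if z <= 9.15 else 1.174 - 0.0267 * z

def csr(z, a_max_g, sigma_v, sigma_v_eff):
    """Seed-Idriss simplified cyclic stress ratio at depth z (m)."""
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * rd(z)

def fs_liq(crr, z, a_max_g, sigma_v, sigma_v_eff):
    return crr / csr(z, a_max_g, sigma_v, sigma_v_eff)

# Illustrative numbers only: 10 m depth, zone V PGA, assumed stresses in kPa.
z, a_max_g = 10.0, 0.36
sigma_v, sigma_v_eff = 180.0, 110.0   # total and effective vertical stress
for crr in (0.15, 0.40):              # assumed CRR before/after densification
    fs = fs_liq(crr, z, a_max_g, sigma_v, sigma_v_eff)
    print(f"CRR = {crr:.2f}: FS = {fs:.2f}")
```

Densification raises CRR (via higher cone tip resistance), which is how a treated profile moves from FS < 1 to FS > 1 at the same depth and shaking level.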
Procedia PDF Downloads 197617 Folding of β-Structures via the Polarized Structure-Specific Backbone Charge (PSBC) Model
Authors: Yew Mun Yip, Dawei Zhang
Abstract:
Proteins are the biological machinery that executes specific vital functions in every cell of the human body by folding into their 3D structures. When a protein misfolds from its native structure, this machinery malfunctions, leading to misfolding diseases. Although in vitro experiments can establish that mutations of the amino acid sequence lead to incorrectly folded protein structures, they cannot decipher the folding process itself. Therefore, molecular dynamics (MD) simulations are employed to simulate the folding process, so that an improved understanding of it will enable us to contemplate better treatments for misfolding diseases. MD simulations make use of force fields to simulate the folding of peptides. Secondary structures are formed via hydrogen bonds between the backbone atoms (C, O, N, H), so it is important that the hydrogen bond energy computed during the MD simulation is accurate in order to direct the folding process toward the native structure. Since the atoms involved in a hydrogen bond possess very dissimilar electronegativities, the more electronegative atom attracts electron density from the less electronegative atom towards itself. This is known as the polarization effect. Because the polarization effect changes the electron density of two atoms in close proximity, the atomic charges of the two atoms should also vary with the strength of the effect. However, the fixed atomic charge scheme in force fields does not account for it. In this study, we introduce the polarized structure-specific backbone charge (PSBC) model. The PSBC model accounts for the polarization effect in MD simulations by updating the atomic charges of the backbone hydrogen-bond atoms according to equations, derived from quantum-mechanical calculations, that relate the amount of charge transferred to an atom to the length of the hydrogen bond. Compared to other polarizable models, the PSBC model does not require quantum-mechanical calculations of the simulated peptide at every time-step, yet maintains a dynamic update of the atomic charges, thereby reducing computational cost and time while still accounting for the polarization effect. The PSBC model is applied to two different β-peptides: the Beta3s/GS peptide, a de novo designed three-stranded β-sheet whose folded structure has been studied in vitro by NMR, and the trpzip peptides, double-stranded β-sheets in which a correlation is found between the type of amino acids that constitute the β-turn and the β-propensity.Keywords: hydrogen bond, polarization effect, protein folding, PSBC
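The abstract's fitted charge-transfer equations are not reproduced here, but the mechanics of the model can be sketched: at each MD step, measure the backbone hydrogen-bond length, look up the transferred charge from a pre-fitted relation, and shift the partial charges of the donor/acceptor pair accordingly. Everything numeric below (the linear law, its cutoffs, the base charges) is a placeholder, not the paper's parametrization.

```python
import numpy as np

def hbond_charge_transfer(r, r_min=1.8, r_max=2.5, dq_max=0.10):
    """Hypothetical linear charge-transfer law: the shorter (stronger) the
    backbone O...H hydrogen bond, the more charge is shifted. The paper fits
    such relations to QM calculations; the numbers here are placeholders."""
    r = np.clip(r, r_min, r_max)
    return dq_max * (r_max - r) / (r_max - r_min)   # in elementary charges

def update_backbone_charges(q_O, q_H, r_OH):
    """Polarize a donor-acceptor pair each MD step instead of re-running QM."""
    dq = hbond_charge_transfer(r_OH)
    return q_O - dq, q_H + dq      # acceptor O gains electron density

q_O, q_H = update_backbone_charges(-0.51, 0.31, r_OH=2.0)
print(f"polarized charges: O {q_O:+.3f} e, H {q_H:+.3f} e")
```

Because the relation is fitted once from quantum-mechanical data, the per-step update costs only an arithmetic evaluation rather than a new QM calculation, which is the efficiency gain the abstract claims over other polarizable models.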
Procedia PDF Downloads 270616 Physicians’ Knowledge and Perception of Gene Profiling in Malaysia: A Pilot Study
Authors: Farahnaz Amini, Woo Yun Kin, Lazwani Kolandaiveloo
Abstract:
The availability of different genetic tests after the completion of the Human Genome Project increases physicians' responsibility to keep themselves up to date on the potential implementation of these tests in their daily practice. However, due to a number of barriers, many physicians are either not aware of these tests or not willing to offer or refer their patients for them. This study used an anonymous, cross-sectional, mail-based survey to develop primary data on Malaysian physicians' level of knowledge and perception of gene profiling. The questionnaire had 29 questions. Total scores on selected questions were used to assess the level of knowledge, with a highest possible score of 11. Descriptive statistics, one-way ANOVA, and the chi-squared test were used for statistical analysis. Sixty-three completed questionnaires were returned, by 27 general practitioners (GPs) and 36 medical specialists. Respondents' ages ranged from 24 to 55 years (mean 30.2 ± 6.4). About 40% of the participants rated themselves as having a poor level of knowledge of genetics in general, whilst 60% believed that they had a fair level of knowledge. However, almost half (46%) of the respondents felt that they were not knowledgeable about available genetic tests. A majority (94%) of the respondents were not aware of any lab or company offering gene profiling services in Malaysia. Only 4% of participants were aware of the use of gene profiling for determining the dosage of some drugs. Respondents perceived greater utility of gene profiling for breast cancer (38%) than for familial colorectal cancer (3%). Knowledge scores ranged from 2 to 8 (mean 4.38 ± 1.67). No significant difference in knowledge score was observed between GPs and specialists (4.19 and 4.58, respectively), and there was no significant association between any demographic factor and level of knowledge; however, those who graduated between 2001 and 2005 had a higher level of knowledge. Overall, 83% of participants showed a relatively high level of perception of the value of gene profiling to detect a patient's risk of disease. However, perception was low for both the use of gene profiling in the general population to prompt lifestyle changes (25%) and the use of a patient's full genome sequence to determine the best match for treatment (18%). The lack of clinical guidelines, limited provider knowledge and awareness, lack of time and resources to educate patients, lack of evidence-based clinical information, and the cost of tests were the barriers to ordering gene profiling most frequently mentioned by physicians. In conclusion, the Malaysian physicians who participated in this study had a mediocre level of knowledge and awareness of gene profiling. Low exposure to genetic questions and problems might be a key predictor of this lack of awareness and knowledge of available genetic tests. Educational and training workshops might be useful in helping Malaysian physicians incorporate gene profiling into practice for eligible patients.Keywords: gene profiling, knowledge, Malaysia, physician
Procedia PDF Downloads 326615 Supply Chain Analysis with Product Returns: Pricing and Quality Decisions
Authors: Mingming Leng
Abstract:
Wal-Mart has allocated considerable human resources to its quality assurance program, in which the largest retailer serves its supply chains as a quality gatekeeper. Asda Stores Ltd., the second largest supermarket chain in Britain, is now investing £27m in significantly increasing the frequency of quality control checks in its supply chains, thus enhancing quality across its fresh food business. Moreover, Tesco, the largest British supermarket chain, has already constructed a quality assessment center to carry out its gatekeeping responsibility. Motivated by the above practices, we consider a supply chain in which a retailer plays the gatekeeping role in quality assurance by identifying defects among a manufacturer's products prior to selling them to consumers. The impact of a retailer's gatekeeping activity on pricing and quality assurance in a supply chain has not previously been investigated in the operations management area. We draw a number of managerial insights that are expected to help practitioners judiciously consider the quality gatekeeping effort at the retail level. As in practice, when the retailer identifies a defective product, she immediately returns it to the manufacturer, who then replaces the defect with a good-quality product and pays a penalty to the retailer. If the retailer does not recognize a defect but sells it to a consumer, the consumer will identify the defect and return it to the retailer, who then passes the returned 'unidentified' defect to the manufacturer; the manufacturer again incurs a penalty cost. Accordingly, we analyze a two-stage pricing and quality decision problem, in which the manufacturer and the retailer bargain over the manufacturer's average defective rate and wholesale price at the first stage, and the retailer decides on her optimal retail price and gatekeeping intensity at the second stage. We also compare the results when the retailer performs quality gatekeeping with those when she does not. Our supply chain analysis exposes some important managerial insights. For example, the retailer's quality gatekeeping can effectively reduce the channel-wide defective rate if her penalty charge for each identified defect is larger than or equal to the market penalty for each unidentified defect. When the retailer implements quality gatekeeping, the change in the negotiated wholesale price depends only on the manufacturer's 'individual' benefit, and the change in the retailer's optimal retail price is related only to the channel-wide benefit. The retailer is willing to take on the quality gatekeeping responsibility when the impact of quality relative to retail price on demand is high and/or the retailer has strong bargaining power. We conclude that the retailer's quality gatekeeping can help reduce the defective rate for consumers, and that this effect becomes more significant when the retailer's bargaining position in her supply chain is stronger. Retailers with stronger bargaining power can benefit more from quality gatekeeping in their supply chains.Keywords: bargaining, game theory, pricing, quality, supply chain
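The retailer's second-stage problem can be illustrated numerically. The sketch below assumes a simple linear demand that rises with delivered quality and a quadratic gatekeeping-effort cost, then grid-searches the retailer's price and gatekeeping intensity; these functional forms and all numbers are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Illustrative parameters: linear demand in price and delivered quality,
# quadratic cost of gatekeeping effort.
a, b, k = 100.0, 2.0, 30.0   # demand: d = a - b*p + k*quality
w, theta = 20.0, 0.10        # wholesale price, manufacturer defective rate
pen_mfr, c_gate = 5.0, 40.0  # per-defect penalty paid to retailer, effort cost

def retailer_profit(p, x):
    quality = 1.0 - theta * (1.0 - x)     # share of good units reaching consumers
    d = max(a - b * p + k * quality, 0.0)
    margin = (p - w) * d
    penalties = pen_mfr * theta * x * d   # identified defects earn a penalty
    return margin + penalties - c_gate * x**2

# Second-stage best response via grid search over price and gatekeeping intensity.
prices = np.linspace(w, 60, 201)
efforts = np.linspace(0, 1, 101)
best = max((retailer_profit(p, x), p, x) for p in prices for x in efforts)
print(f"profit = {best[0]:.1f} at price p = {best[1]:.2f}, intensity x = {best[2]:.2f}")
```

The first stage would then wrap this best response in a bargaining problem over the wholesale price and the average defective rate, as the abstract describes.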
Procedia PDF Downloads 279614 Enhancing VR Exposure Therapy for the Treatment of Phobias with the Use of Photorealistic VR Environments and Stimuli, and the Use of Tactile Feedback Suits and Responsive Systems
Authors: Vardan Melkonyan, Arman Azizyan, Astghik Boyajyan
Abstract:
Virtual reality (VR) exposure therapy is a form of cognitive-behavioral therapy that uses immersive virtual environments to expose individuals to the feared stimuli or situations that trigger their phobia. VR exposure therapy has become an increasingly popular treatment for phobias, including fear of heights, public speaking, and flying, due to its ability to provide a controlled and safe environment in which individuals can confront their fears, while also allowing therapists to tailor the virtual exposure to the specific needs and goals of each individual. It is also a cost-effective and accessible treatment option, as it can be delivered remotely and does not require the use of drugs. Overall, VR exposure therapy has the potential to be a valuable tool for therapists in the treatment of phobias, but current methods may be improved by incorporating advanced technology such as photorealistic VR environments, tactile feedback suits, and responsive systems. The aim of this study was to identify the most effective approach to enhancing VR exposure therapy for the treatment of phobias. Photorealistic VR environments and stimuli can greatly enhance the effectiveness of VR exposure therapy. By creating immersive, realistic virtual environments that closely mimic the real-life situations that trigger phobic responses, patients are able to engage more fully in the therapeutic process and confront their fears in a controlled and safe manner. This can help reduce the severity of phobia symptoms and improve treatment outcomes. The use of tactile feedback suits and responsive systems can further enhance the VR exposure therapy experience by adding a physical element to the virtual environment. These suits, which can mimic the sensations of touch, pressure, and movement, allow patients to immerse themselves fully in the virtual world and feel as if they are physically present in the situation. This can increase the realism of the virtual environment and make it more effective in reducing phobia symptoms. Additionally, responsive systems can be used to trigger specific events or responses within the virtual environment based on the patient's actions, providing a more interactive and personalized treatment experience. A comprehensive literature review was conducted, including studies on VR exposure therapy for phobias and on the use of advanced technology to enhance the therapy. The results indicate that incorporating these enhancements may significantly increase the effectiveness of VR exposure therapy for phobias. Further research is needed to fully understand the potential of these enhancements and to determine their optimal combination and implementation.Keywords: virtual reality, mental health, phobias, fears, treatment, photorealistic, immersive
Procedia PDF Downloads 89