Search results for: short integer solution (SIS) problem
1023 Numerical Investigation of the Boundary Conditions at Liquid-Liquid Interfaces in the Presence of Surfactants
Authors: Bamikole J. Adeyemi, Prashant Jadhawar, Lateef Akanji
Abstract:
Liquid-liquid interfacial flow is an important process with applications across many spheres. One such application is residual oil mobilization, where crude oil and low salinity water are emulsified due to lowered interfacial tension under low shear rates. The amphiphilic components (asphaltenes and resins) in crude oil are considered to assemble at the interface between the two immiscible liquids. To justify emulsification, drag and snap-off suppression as the main effects of low salinity water, mobilization of residual oil is visualized as thickening and slip of the wetting phase at the brine/crude oil interface, which results in the squeezing and drag of the non-wetting phase to the pressure sinks. Meanwhile, defining the boundary conditions for such a system can be very challenging, since the interfacial dynamics depend not only on interfacial tension but also on the flow rate. Hence, understanding the flow boundary condition at the brine/crude oil interface is an important step towards defining the influence of low salinity water composition on residual oil mobilization. This work presents a numerical evaluation of three slip boundary conditions that may apply at liquid-liquid interfaces. A mathematical model was developed to describe the evolution of a viscoelastic interfacial thin liquid film. The base model is developed by asymptotic expansion of the full Navier-Stokes equations for fluid motion due to gradients of surface tension. This model was upscaled to describe the dynamics of the film surface deformation. Subsequently, Jeffrey's model was integrated into the formulations to account for viscoelastic stress within a long-wave approximation of the Navier-Stokes equations. To study the fluid response to a prescribed disturbance, a linear stability analysis (LSA) was performed. The dispersion relation and the corresponding characteristic equation for the growth rate were obtained.
Three slip boundary conditions (slip, 1; locking, -1; and no-slip, 0) were examined using the resulting characteristic equation. The dynamics of the evolved interfacial thin liquid film were also numerically evaluated by considering the influence of the boundary conditions. The linear stability analysis shows that the boundary conditions of such systems are greatly impacted by the presence of amphiphilic molecules when three different values of interfacial tension were tested. The results for the slip and locking conditions are consistent with the fundamental-solution representation of the diffusion equation, in which the film decays. The interfacial films under both boundary conditions respond to exposure time in a similar manner, with an increasing growth rate that results in the formation of more droplets with time. By contrast, the no-slip boundary condition yielded unbounded growth and was not affected by interfacial tension.
Keywords: boundary conditions, liquid-liquid interfaces, low salinity water, residual oil mobilization
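As a rough illustration of how such a characteristic equation can be explored numerically, the sketch below evaluates a generic long-wave thin-film growth rate with a Navier-type slip parameter. The dispersion relation and all parameter values here are illustrative assumptions (a standard surface-tension-driven lubrication form), not the model derived in the abstract.

```python
import math

def growth_rate(k, h0=1.0, sigma=1.0, mu=1.0, beta=0.0):
    """Illustrative long-wave growth rate for a surface-tension-driven
    thin film with Navier slip length beta (beta > 0: slip, beta = 0:
    no-slip, beta < 0: 'locking'). A larger effective mobility means
    faster film decay (more negative omega)."""
    mobility = h0**3 / 3.0 + beta * h0**2   # lubrication mobility with slip
    return -(sigma / mu) * mobility * k**4  # omega(k); negative => decay

k = 2.0 * math.pi                        # wavenumber of the disturbance
omega_slip = growth_rate(k, beta=+0.1)   # slip-type boundary condition
omega_noslip = growth_rate(k, beta=0.0)  # no-slip
omega_lock = growth_rate(k, beta=-0.1)   # locking

# Slip enhances the effective mobility, so the film decays fastest;
# locking reduces it, so decay is slowest.
print(omega_slip < omega_noslip < omega_lock)  # True
```

This only reproduces the qualitative ordering of decay rates across the three boundary conditions, not the viscoelastic (Jeffrey's model) characteristic equation of the study.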
Procedia PDF Downloads 126
1022 Preparation and Characterization of Poly(L-Lactic Acid)/Oligo(D-Lactic Acid) Grafted Cellulose Composites
Authors: Md. Hafezur Rahaman, Mohd. Maniruzzaman, Md. Shadiqul Islam, Md. Masud Rana
Abstract:
With the growth of environmental awareness, extensive research is underway to develop next-generation materials based on sustainability, eco-competence, and green chemistry to preserve and protect the environment. Due to its biodegradability and biocompatibility, poly(L-lactic acid) (PLLA) is of great interest for ecological and medical applications. Also, cellulose is one of the most abundant biodegradable, renewable polymers found in nature. It has several advantages such as low cost, high mechanical strength, and biodegradability. Recently, a great deal of attention has been paid to the scientific and technological development of α-cellulose based composite materials. PLLA could be used for grafting onto cellulose to improve compatibility prior to composite preparation. However, it is quite difficult to form a bond between a polymer of lower hydrophilicity like PLLA and α-cellulose. Dimers and oligomers can easily be grafted onto the surface of cellulose by ring-opening or polycondensation methods due to their low molecular weight. In this research, α-cellulose extracted from jute fiber is grafted with oligo(D-lactic acid) (ODLA) via a graft polycondensation reaction in the presence of para-toluene sulphonic acid and potassium persulphate in toluene at 130°C for 9 hours under 380 mmHg. Here ODLA is synthesized by ring-opening polymerization of D-lactides in the presence of stannous octoate (0.03 wt% of lactide) and D-lactic acid at 140°C for 10 hours. Composites of PLLA with ODLA-grafted α-cellulose are prepared by a solution mixing and film casting method. Grafting was confirmed by FTIR spectroscopy and SEM analysis. A strong carbonyl peak at 1728 cm⁻¹ in the FTIR spectrum of ODLA-grafted α-cellulose, absent in α-cellulose, confirms the grafting of ODLA onto α-cellulose.
It is also observed from SEM photographs that there are some white areas (spots) on ODLA-grafted α-cellulose, as compared to α-cellulose, which may indicate the grafting of ODLA and is consistent with the FTIR results. Analysis of the composites is carried out by FTIR, SEM, WAXD and thermal gravimetric analysis. Most of the characteristic FTIR absorption peaks of the composites shifted to higher wavenumbers with increasing peak area, which may confirm that PLLA and grafted cellulose have better compatibility in the composites via intermolecular hydrogen bonding; this supports previously published results. The distribution of grafted α-cellulose in the composites is uniform, as observed by SEM analysis. WAXD studies show that only homo-crystalline structures of PLLA are present in the composites. The thermal stability of the composites is enhanced with increasing percentages of ODLA-grafted α-cellulose. As a consequence, the resultant composites have a resistance toward thermal degradation. The effects of the length of the grafted chain and the biodegradability of the composites will be studied in further research.
Keywords: α-cellulose, composite, graft polycondensation, oligo(D-lactic acid), poly(L-lactic acid)
Procedia PDF Downloads 116
1021 Flow Visualization and Mixing Enhancement in Y-Junction Microchannel with 3D Acoustic Streaming Flow Patterns Induced by Trapezoidal Triangular Structure using High-Viscous Liquids
Authors: Ayalew Yimam Ali
Abstract:
The Y-shaped microchannel system is used to mix fluids of low or high viscosity, and laminar flow with high-viscosity water-glycerol fluids makes mixing at the entrance Y-junction region a challenging issue. Acoustic streaming (AS) is a time-averaged, steady, second-order flow phenomenon that can produce a rolling motion in the microchannel when an oscillating low-frequency acoustic transducer induces an acoustic wave in the flow field; it is a promising strategy to enhance diffusive mass transfer and mixing performance under laminar flow conditions. In this study, a 3D trapezoidal structure was manufactured with advanced CNC cutting tools to produce molds of the trapezoidal structure, with 3D sharp-edge tip angles of 30° and a 0.3 mm spine sharp-edge tip depth, from PMMA (polymethylmethacrylate), and the microchannel was fabricated using PDMS (polydimethylsiloxane); the structure extends longitudinally along the top surface of the Y-junction mixing region to visualize the 3D rolling steady acoustic streaming and to evaluate mixing performance using high-viscosity miscible fluids. The 3D acoustic streaming flow patterns and mixing enhancement were investigated using the micro-particle image velocimetry (μPIV) technique for different spine depth lengths, channel widths, volume flow rates, oscillation frequencies, and amplitudes. The velocity and vorticity flow fields show that a pair of 3D counter-rotating streaming vortices were created around the trapezoidal spine structure, with vorticity up to 8 times higher than in the case without acoustic streaming in the Y-junction with the high-viscosity water-glycerol mixture fluids.
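Vorticity maps like these are obtained from the μPIV velocity fields; as a minimal sketch, the snippet below computes the out-of-plane vorticity ω_z = ∂v/∂x − ∂u/∂y by central differences on a uniform grid. The analytic test field (rigid-body rotation u = −y, v = x, whose vorticity is exactly 2) is an illustrative assumption, not the study's data.

```python
def vorticity(u, v, dx, dy):
    """Out-of-plane vorticity w_z = dv/dx - du/dy by central differences.
    u[i][j], v[i][j] are velocity components sampled at x = i*dx, y = j*dy.
    Values are computed on interior points only; the border stays zero."""
    ni, nj = len(u), len(u[0])
    w = [[0.0] * nj for _ in range(ni)]
    for i in range(1, ni - 1):
        for j in range(1, nj - 1):
            dvdx = (v[i + 1][j] - v[i - 1][j]) / (2 * dx)
            dudy = (u[i][j + 1] - u[i][j - 1]) / (2 * dy)
            w[i][j] = dvdx - dudy
    return w

# Rigid-body rotation: u = -y, v = x  =>  vorticity is exactly 2 everywhere.
dx = dy = 0.1
n = 5
u = [[-(j * dy) for j in range(n)] for i in range(n)]
v = [[(i * dx) for j in range(n)] for i in range(n)]
w = vorticity(u, v, dx, dy)
print(w[2][2])  # 2.0 (up to floating-point rounding)
```

In practice the same stencil is applied to the measured μPIV vector fields rather than an analytic flow.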
The mixing experiments were performed using a fluorescent green dye solution with de-ionized water on one inlet side and de-ionized water-glycerol with different mass-weight percentage ratios on the other inlet side of the Y-channel, and performance was evaluated through the degree of mixing at different amplitudes, flow rates, frequencies, and spine sharp-tip edge angles using the grayscale pixel intensity in MATLAB. The degree of mixing (M) was found to improve significantly to 96.8% with acoustic streaming, from 67.42% without it, in the case of a 0.0986 μl/min flow rate, 12 kHz frequency and 40 V oscillation amplitude at y = 2.26 mm. The results suggest the creation of a new 3D steady streaming rolling motion at high volume flow rates around the entrance junction mixing region, which promotes the mixing of two similar high-viscosity fluids inside the microchannel that laminar flow alone is unable to mix.
Keywords: nanofabrication, 3D acoustic streaming flow visualization, micro-particle image velocimetry, mixing enhancement
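A degree of mixing computed from grayscale pixel intensities can be sketched as follows (here in Python rather than the MATLAB used in the study). The index below, based on the normalized standard deviation of intensity across the channel cross-section, is a standard choice but an assumption about the exact formula used.

```python
import math

def degree_of_mixing(intensities):
    """Degree of mixing from grayscale pixel intensities sampled across
    the channel cross-section: M = 1 - std(I) / mean(I). M approaches 1
    for a perfectly mixed (uniform) image, and is lower for segregated
    streams."""
    n = len(intensities)
    mean = sum(intensities) / n
    var = sum((i - mean) ** 2 for i in intensities) / n
    return 1.0 - math.sqrt(var) / mean

perfectly_mixed = [128] * 8                    # uniform grayscale
segregated = [255, 255, 255, 255, 0, 0, 0, 0]  # dye on one side only
print(degree_of_mixing(perfectly_mixed))  # 1.0
print(degree_of_mixing(segregated))       # 0.0
```

Applied to a cross-section line of pixels at a fixed downstream position (e.g. y = 2.26 mm), this yields a single mixing figure per operating condition.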
Procedia PDF Downloads 31
1020 Breast Cancer Incidence Estimation in Castilla-La Mancha (CLM) from Mortality and Survival Data
Authors: C. Romero, R. Ortega, P. Sánchez-Camacho, P. Aguilar, V. Segur, J. Ruiz, G. Gutiérrez
Abstract:
Introduction: Breast cancer is a leading cause of death in CLM (2.8% of all deaths in women and 13.8% of deaths from tumors in women). It is the tumor with the highest incidence in the CLM region, accounting for 26.1% of all tumors excluding non-melanoma skin cancer (Cancer Incidence in Five Continents, Volume X, IARC). Cancer registries are a good source of information to estimate cancer incidence; however, the data are usually available with a lag, which makes their use by health managers difficult. By contrast, mortality and survival statistics have less delay. In order to serve resource planning and respond to this problem, a method is presented to estimate incidence from mortality and survival data. Objectives: To estimate the incidence of breast cancer by age group in CLM in the period 1991-2013, and to compare the data obtained from the model with current incidence data. Sources: Annual number of women by single year of age (National Statistics Institute). Annual number of deaths from all causes and from breast cancer (Mortality Registry CLM). Breast cancer relative survival probabilities (EUROCARE, Spanish registries data). Methods: A Weibull parametric survival model was fitted to the EUROCARE data. From the survival model, the population data and the mortality data, the Mortality and Incidence Analysis MODel (MIAMOD) regression model was used to estimate the incidence of cancer by age (1991-2013). Results: The resulting model is: I(x,t) = Logit[const + age1·x + age2·x² + coh1·(t − x) + coh2·(t − x)²], where I(x,t) is the incidence at age x in period (year) t, and the parameter estimates are: const (constant term) = -7.03; age1 = 3.31; age2 = -1.10; coh1 = 0.61 and coh2 = -0.12. It is estimated that 662 cases of breast cancer were diagnosed in CLM in 1991 (81.51 per 100,000 women). An estimated 1,152 cases (112.41 per 100,000 women) were diagnosed in 2013, representing an increase of 40.7% in the gross incidence rate (1.9% per year).
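The age-cohort model above can be evaluated as sketched below. The abstract does not specify how the age and cohort covariates are scaled, so the inputs here are illustrative assumptions; only the functional form (an inverse logit applied to the polynomial predictor, with t − x indexing the birth cohort) is taken from the text.

```python
import math

# Parameter estimates reported for the MIAMOD model
CONST, AGE1, AGE2, COH1, COH2 = -7.03, 3.31, -1.10, 0.61, -0.12

def incidence(x, t):
    """I(x, t): inverse logit of the age-cohort predictor, where x is
    (scaled) age and t the (scaled) calendar period; t - x indexes the
    birth cohort. Returned as a rate per 100,000 women."""
    eta = CONST + AGE1 * x + AGE2 * x**2 + COH1 * (t - x) + COH2 * (t - x)**2
    prob = 1.0 / (1.0 + math.exp(-eta))  # inverse logit
    return prob * 100_000

# Illustrative scaled covariates (not the study's actual scaling):
print(incidence(1.0, 1.5))
```

With the study's true covariate scaling, the same expression reproduces the reported rates (81.51 per 100,000 in 1991 rising to 112.41 in 2013).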
The average annual increases in incidence by age were: 2.07% in women aged 25-44 years, 1.01% (45-54 years), 1.11% (55-64 years) and 1.24% (65-74 years). The cancer registries in Spain that send data to IARC declared an average annual incidence rate of 98.6 cases per 100,000 women for 2003-2007; our model yields an incidence of 100.7 cases per 100,000 women. Conclusions: A sharp and steady increase in the incidence of breast cancer over the period 1991-2013 is observed. The increase was seen in all age groups considered, although it seems more pronounced in young women (25-44 years). This method provides a good estimate of incidence.
Keywords: breast cancer, incidence, cancer registries, Castilla-La Mancha
Procedia PDF Downloads 310
1019 Effect of Phenolic Acids on Human Saliva: Evaluation by Diffusion and Precipitation Assays on Cellulose Membranes
Authors: E. Obreque-Slier, F. Orellana-Rodríguez, R. López-Solís
Abstract:
Phenolic compounds are secondary metabolites present in some foods, such as wine. Polyphenols comprise two main groups: flavonoids (anthocyanins, flavanols, and flavonols) and non-flavonoids (stilbenes and phenolic acids). Phenolic acids are low-molecular-weight non-flavonoid compounds that are usually grouped into benzoic acids (gallic, vanillic and protocatechuic acids) and cinnamic acids (ferulic, p-coumaric and caffeic acids). Likewise, tannic acid is an important polyphenol constituted mainly of gallic acid. Phenolic compounds are responsible for important properties in foods and drinks, such as color, aroma, bitterness, and astringency. Astringency is a drying, roughing, and sometimes puckering sensation that is experienced on the various oral surfaces during or immediately after tasting foods. Astringency perception has been associated with interactions between flavanols present in some foods and salivary proteins. Despite the quantitative relevance of phenolic acids in food and beverages, there is no information about their effect on salivary proteins and, consequently, on the sensation of astringency. The objective of this study was to assess the interaction of several phenolic acids (gallic, vanillic, protocatechuic, ferulic, p-coumaric and caffeic acids) with saliva. Tannic acid was used as a control. Thus, solutions of each phenolic acid (5 mg/mL) were mixed with human saliva (1:1 v/v). After incubation for 5 min at room temperature, 15-μL aliquots of the mixtures were dotted on a cellulose membrane and allowed to diffuse. The dry membrane was fixed in 50 g/L trichloroacetic acid, rinsed in 800 mL/L ethanol and stained for protein with Coomassie blue for 20 min, destained with several rinses of 73 g/L acetic acid and dried under a heat lamp. Both the diffusion area and the stain intensity of the protein spots served as semi-qualitative estimates of protein-tannin interaction (diffusion test).
The remainder of each whole saliva-phenol solution mixture from the diffusion assay was centrifuged, and 15-μL aliquots of each supernatant were dotted on a cellulose membrane, allowed to diffuse and processed for protein staining, as indicated above. In this latter assay, reduced protein staining was taken as indicative of protein precipitation (precipitation test). Diffusion of the salivary protein was restricted by the presence of each phenolic acid (an anti-diffusive effect), while tannic acid did not alter diffusion of the salivary protein. By contrast, the phenolic acids did not provoke precipitation of the salivary protein, while tannic acid did. In addition, binary mixtures (mixtures of two components) of various phenolic acids with gallic acid restricted the diffusion of saliva, a similar effect to that observed for the corresponding individual phenolic acids. In contrast, binary mixtures of phenolic acids with tannic acid, as well as tannic acid alone, did not affect the diffusion of the saliva but provoked an evident precipitation. In summary, phenolic acids showed a relevant interaction with the salivary proteins, suggesting that these wine compounds can also contribute to the sensation of astringency.
Keywords: astringency, polyphenols, tannins, tannin-protein interaction
Procedia PDF Downloads 245
1018 Ytterbium Advantages for Brachytherapy
Authors: S. V. Akulinichev, S. A. Chaushansky, V. I. Derzhiev
Abstract:
High dose rate (HDR) brachytherapy is a method of contact radiotherapy in which a single sealed source with an activity of about 10 Ci is temporarily inserted in the tumor area. The isotopes Ir-192 and (much less often) Co-60 are used as the active material for such sources. The other type of brachytherapy, low dose rate (LDR) brachytherapy, involves the insertion of many permanent sources (up to 200) of lower activity. Pulse dose rate (PDR) brachytherapy can be considered a modification of HDR brachytherapy, in which the single source is repeatedly introduced into the tumor region in a pulsed regime over several hours. The PDR source activity is of the order of one Ci, and the isotope Ir-192 is currently used for these sources. PDR brachytherapy is well recommended for the treatment of several tumors since, according to oncologists, it combines the medical benefits of both the HDR and LDR types of brachytherapy. One of the main problems for the progress of PDR brachytherapy is the shielding of the treatment area, since a longer stay of patients in a shielded canyon is uncomfortable for them. The use of Yb-169 as an active source material is a way to resolve the shielding problem for PDR, as well as for HDR, brachytherapy. The isotope Yb-169 has an average photon emission energy of 93 keV and a half-life of 32 days. Compared to iridium and cobalt, this isotope has a significantly lower emission energy and therefore requires much lighter shielding. Moreover, the absorption cross section of different materials has a strong Z-dependence in that photon energy range. For example, the dose distributions of iridium and ytterbium behave quite similarly in water or in the body, but a heavier material such as lead absorbs the ytterbium radiation much more strongly than the iridium or cobalt radiation. For example, only a 2 mm lead layer is enough to reduce the ytterbium radiation by a couple of orders of magnitude, but is not enough to protect from iridium radiation.
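The shielding argument can be made concrete with a simple narrow-beam attenuation estimate, I/I₀ = exp(−μx). The linear attenuation coefficients below are rough assumed values for lead at the relevant photon energies (≈93 keV for Yb-169 versus a few hundred keV mean for Ir-192), so the numbers are order-of-magnitude illustrations only, not measured data from the abstract.

```python
import math

def transmission(mu_per_cm, thickness_cm):
    """Narrow-beam transmitted fraction I/I0 = exp(-mu * x)."""
    return math.exp(-mu_per_cm * thickness_cm)

# Assumed (approximate) linear attenuation coefficients of lead:
MU_PB_YB169 = 60.0  # 1/cm near ~93 keV (Yb-169); rough value
MU_PB_IR192 = 2.6   # 1/cm near the ~380 keV mean of Ir-192; rough value

x = 0.2  # 2 mm of lead
print(transmission(MU_PB_YB169, x))  # ~1e-5: Yb-169 is almost fully blocked
print(transmission(MU_PB_IR192, x))  # ~0.6: most Ir-192 photons pass through
```

Under these assumed coefficients, 2 mm of lead suppresses the Yb-169 beam by several orders of magnitude while barely attenuating Ir-192, which is the point made in the text.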
We have created an original facility to produce the starting stable isotope Yb-168 using the AVLIS laser technology. This facility makes it possible to raise the Yb-168 concentration up to 50% and consumes much less electrical power than the alternative electromagnetic enrichment facilities. We also developed, in cooperation with the Institute of High Pressure Physics of RAS, a new technology for manufacturing high-density ceramic cores of ytterbium oxide. The ceramic density reaches the limit of the theoretical values: 9.1 g/cm³ for the cubic phase of ytterbium oxide and 10 g/cm³ for the monoclinic phase. Source cores made from this ceramic have high mechanical characteristics and a glassy surface. The use of ceramics makes it possible to increase the source activity for fixed external source dimensions.
Keywords: brachytherapy, high and pulse dose rates, radionuclides for therapy, ytterbium sources
Procedia PDF Downloads 490
1017 Bioefficiency of Cinnamomum verum Loaded Niosomes and Its Microbicidal and Mosquito Larvicidal Activity against Aedes aegypti, Anopheles stephensi and Culex quinquefasciatus
Authors: Aasaithambi Kalaiselvi, Michael Gabriel Paulraj, Ekambaram Nakkeeran
Abstract:
The emergence of mosquito vector-borne diseases is a perpetual global problem in tropical countries. The outbreak of several diseases such as chikungunya, zika virus infection and dengue fever has created a massive threat to the living population. Frequent usage of synthetic insecticides like dichloro diphenyl trichloroethane (DDT) has had adverse effects on humans as well as the environment. Since there are no long-lasting vaccines, preventions, treatments or drugs available for these pathogenic vectors, WHO is concerned with eradicating their breeding sites effectively, without side effects on humans and the environment, through plant-derived, eco-friendly natural bio-insecticides. The aim of this study is to investigate the larvicidal potency of Cinnamomum verum essential oil (CEO) loaded niosomes. Cholesterol and the surfactant variants Span 20, 60 and 80 were used in synthesizing CEO loaded niosomes via the transmembrane pH gradient method. The synthesized CEO loaded niosomes were characterized by zeta potential, particle size, Fourier Transform Infrared Spectroscopy (FT-IR), GC-MS and SEM analysis to evaluate their charge, size, functional properties, composition of secondary metabolites and morphology. The Z-average size of the formed niosomes was 1870.84 nm, and they showed good stability with a zeta potential of -85.3 mV. The entrapment efficiency of the CEO loaded niosomes was determined by UV-Visible spectrophotometry. The bio-potency of the CEO loaded niosomes was assessed against gram-positive (Bacillus subtilis) and gram-negative (Escherichia coli) bacteria and fungi (Aspergillus fumigatus and Candida albicans) at various concentrations. The larvicidal activity was evaluated against II to IV instar larvae of Aedes aegypti, Anopheles stephensi and Culex quinquefasciatus at various concentrations for 24 h, and the LC₅₀ and LC₉₀ values were calculated from the mortality rates.
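LC₅₀ and LC₉₀ values are usually estimated by probit or logit regression of mortality against log concentration. As a minimal sketch, the snippet below instead log-linearly interpolates the concentration giving the target mortality from an assumed bioassay; the concentrations and mortality fractions are invented for illustration and are not the study's results.

```python
import math

def lc_estimate(concs, mortality, target=0.5):
    """Estimate the concentration giving `target` mortality by linear
    interpolation on a log10(concentration) scale. `concs` must be
    increasing and the mortality fractions must bracket the target."""
    logs = [math.log10(c) for c in concs]
    for i in range(len(mortality) - 1):
        lo, hi = mortality[i], mortality[i + 1]
        if lo <= target <= hi:
            frac = (target - lo) / (hi - lo)
            return 10 ** (logs[i] + frac * (logs[i + 1] - logs[i]))
    raise ValueError("target mortality not bracketed by the data")

# Invented example bioassay (ppm vs fraction of larvae dead at 24 h):
concs = [10, 20, 40, 80, 160]
mortality = [0.05, 0.20, 0.45, 0.75, 0.95]
lc50 = lc_estimate(concs, mortality, 0.5)
lc90 = lc_estimate(concs, mortality, 0.9)
print(round(lc50, 1), round(lc90, 1))
```

A full probit analysis additionally yields confidence limits and a chi-square goodness-of-fit, which this interpolation sketch omits.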
The results showed that CEO loaded niosomes have great efficiency as a mosquito larvicide. The results suggest that niosomes could be used in various applications of biotechnology and drug delivery systems with greater stability by altering the drug of interest.
Keywords: Cinnamomum verum, niosomes, entrapment efficiency, bactericidal and fungicidal, mosquito larvicidal activity
Procedia PDF Downloads 163
1016 Evidence-Based in Telemonitoring of Users with Pacemakers at Five Years after Implant: The Poniente Study
Authors: Antonio Lopez-Villegas, Daniel Catalan-Matamoros, Remedios Lopez-Liria
Abstract:
Objectives: The purpose of this study was to analyze the clinical data, health-related quality of life (HRQoL) and functional capacity of patients using a telemonitoring follow-up system (TM) compared to patients followed up through standard outpatient visits (HM), 5 years after pacemaker implantation. Methods: This is a controlled, non-randomised, non-blinded clinical trial, with data collection carried out 5 years after pacemaker implantation. The study was developed at the Hospital de Poniente (Almeria, Spain) between October 2012 and November 2013. The same clinical outcomes were analyzed in both follow-up groups. Health-related quality of life and functional capacity were assessed through the EuroQol-5D (EQ-5D) questionnaire and the Duke Activity Status Index (DASI), respectively. Sociodemographic characteristics and clinical data were also analyzed. Results: 5 years after pacemaker implantation, 55 of the 82 initial patients finished the study. Users with pacemakers had been assigned to either a conventional hospital follow-up group (HM = 34, of 50 initially) or a telemonitoring system group (TM = 21, of 32 initially). No significant differences were found between the groups in sociodemographic characteristics, clinical data, health-related quality of life or functional capacity according to the medical records and the EQ-5D and DASI questionnaires. In addition, conventional follow-up visits to the hospital were reduced by 44.84% (p < 0.001) in the telemonitoring group relative to the hospital monitoring group. Conclusion: The results obtained in this study suggest that telemonitoring of users with pacemakers is an option equivalent to conventional hospital follow-up in terms of health-related quality of life and functional capacity. Furthermore, it allows for the early detection of cardiovascular and pacemaker-related events and significantly reduces the number of in-hospital visits. Trial registration: ClinicalTrials.gov NCT02234245.
The PONIENTE study has been funded by the General Secretariat for Research, Development and Innovation, Regional Government of Andalusia (Spain), project reference number PI/0256/2017, under the research call 'Development and Innovation Projects in the Field of Biomedicine and Health Sciences', 2017.
Keywords: cardiovascular diseases, health-related quality of life, pacemakers follow-up, remote monitoring, telemedicine
Procedia PDF Downloads 124
1015 Climate Change Impact on Water Resources Management in Remote Islands Using Hybrid Renewable Energy Systems
Authors: Elissavet Feloni, Ioannis Kourtis, Konstantinos Kotsifakis, Evangelos Baltas
Abstract:
Water inadequacy on the small dry islands scattered in the Aegean Sea (Greece) is a major Water Resources Management (WRM) problem, especially during the summer period due to tourism. In the present work, various WRM schemes are designed and presented. The WRM schemes take into account current infrastructure and include rainwater harvesting tanks and reverse osmosis desalination units. The energy requirements are covered mainly by wind turbines and/or a seawater pumped storage system. Sizing is based on the available data for population and tourism per island, after accounting for a slight increase in population (up to 1.5% per year), and guarantees at least 80% reliability for the energy supply and 99.9% for potable water. Evaluation of the scenarios is carried out from a financial perspective, after calculating the Life Cycle Cost (LCC) of each investment for a lifespan of 30 years. The wind-powered desalination plant was found to be the most cost-effective option. Finally, in order to estimate the climate change (CC) impact, six different CC scenarios were investigated. The corresponding ratio of on-grid versus off-grid energy required to ensure the targeted reliability was investigated per island for the baseline and for each climatic scenario. The results revealed that under CC the on-grid energy required would increase, and as a result the reduction in the reliability of the wind turbines and seawater pumped storage systems would be in the range of 4 to 44%. However, the range of this percentage change does not exceed 22% per island across all examined CC scenarios. Overall, it is proposed that CC be incorporated into the design process for WRM-related projects.
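Life Cycle Cost comparisons like the one above typically discount capital plus operation and maintenance costs over the project lifespan. The sketch below computes a present-value LCC over 30 years; all cost figures and the discount rate are illustrative assumptions, not the study's data.

```python
def life_cycle_cost(capex, annual_om, lifespan_years=30, discount_rate=0.05):
    """Present-value Life Cycle Cost: up-front capital expenditure plus
    discounted annual operation & maintenance over the lifespan."""
    pv_om = sum(annual_om / (1 + discount_rate) ** y
                for y in range(1, lifespan_years + 1))
    return capex + pv_om

# Illustrative figures (EUR) for two WRM schemes on a small island:
# high capital / low running cost vs low capital / high running cost.
wind_desalination = life_cycle_cost(capex=2_000_000, annual_om=80_000)
grid_desalination = life_cycle_cost(capex=900_000, annual_om=220_000)
print(wind_desalination < grid_desalination)  # True
```

With these assumed inputs the capital-heavy but cheap-to-run scheme wins over 30 years, which is the kind of trade-off the LCC evaluation in the study captures.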
Acknowledgements: This research is co-financed by Greece and the European Union (European Social Fund - ESF) through the Operational Program «Human Resources Development, Education and Lifelong Learning 2014-2020» in the context of the project “Development of a combined rain harvesting and renewable energy-based system for covering domestic and agricultural water requirements in small dry Greek Islands” (MIS 5004775).
Keywords: small dry islands, water resources management, climate change, desalination, RES, seawater pumped storage system, rainwater harvesting
Procedia PDF Downloads 114
1014 Recycling the Lanthanides from Permanent Magnets by Electrochemistry in Ionic Liquid
Authors: Celine Bonnaud, Isabelle Billard, Nicolas Papaiconomou, Eric Chainet
Abstract:
Thanks to their high magnetization and low mass, permanent magnets (NdFeB and SmCo) have quickly become essential for new energy applications (wind turbines, electric vehicles, etc.). They contain large quantities of neodymium, samarium and dysprosium, which have recently been classified as critical elements and therefore need to be recycled. Electrochemical processes, including electrodissolution followed by electrodeposition, are an elegant and environmentally friendly solution for recycling the lanthanides contained in permanent magnets. However, electrochemistry of the lanthanides is a real challenge, as their standard potentials are highly negative (around -2.5 V vs NHE). Consequently, non-aqueous solvents are required. Ionic liquids (ILs) are novel electrolytes exhibiting physico-chemical properties that fulfill many requirements of the sustainable chemistry principles, such as extremely low volatility and non-flammability. Furthermore, their chemical and electrochemical properties (solvation of metallic ions, large electrochemical windows, etc.) render them very attractive media for implementing alternative and sustainable integrated processes. All experiments presented were carried out using butyl-methylpyrrolidinium bis(trifluoromethanesulfonyl)imide. Linear sweep voltammetry, cyclic voltammetry and potentiostatic techniques were used. The reliability of the electrochemical experiments, performed without a glove box, for the classic three-electrode cell used in this study has been assessed. Deposits were obtained by chronoamperometry and were characterized by scanning electron microscopy and energy-dispersive X-ray spectroscopy.
The cathodic behavior of the IL under different conditions (argon, nitrogen or oxygen atmosphere, and water content) and on several electrode materials (Pt, Au, GC) shows that with an argon gas flow and gold as the working electrode, the cathodic potential can reach a maximum value of -3 V vs Fc+/Fc, thus allowing a possible reduction of the lanthanides. On a gold working electrode, the reduction potential of samarium and neodymium was found to be -1.8 V vs Fc+/Fc, while that of dysprosium was -2.1 V vs Fc+/Fc. The individual deposits obtained were found to be porous and contained significant amounts of C, N, F, S and O atoms. Selective deposition of neodymium in the presence of dysprosium was also studied and will be discussed. Next, metallic Sm, Nd and Dy electrodes were used in place of Au, which induced changes in the reduction potential values and the structures of the lanthanide deposits. The individual corrosion potentials were also measured in order to determine the parameters influencing the electrodissolution of these metals. Finally, a full recycling process was investigated. Electrodissolution of a real permanent magnet sample was monitored kinetically, and then the sequential electrodeposition of all the lanthanides contained in the IL was investigated. Yields, quality of the deposits and consumption of chemicals will be discussed in depth, in view of the industrial feasibility of this process for recycling real permanent magnets.
Keywords: electrodeposition, electrodissolution, ionic liquids, lanthanides, recycling
Procedia PDF Downloads 272
1013 Performance Improvement of Piston Engine in Aeronautics by Means of Additive Manufacturing Technologies
Authors: G. Andreutti, G. Saccone, D. Lucariello, C. Pirozzi, S. Franchitti, R. Borrelli, C. Toscano, P. Caso, G. Ferraro, C. Pascarella
Abstract:
The reduction of greenhouse gas and pollutant emissions is a worldwide environmental issue. The amount of CO₂ released by an aircraft is associated with the amount of fuel burned, so the improvement of engine thermo-mechanical efficiency and specific fuel consumption is a significant technological driver for aviation. Moreover, with the prospect that avgas will be phased out, an engine able to use more widely available and cheaper fuels is an evident advantage. An advanced aeronautical Diesel engine, because of its high efficiency and ability to use widely available and low-cost jet and diesel fuels, is a promising solution for a more fuel-efficient aircraft. On the other hand, a Diesel engine generally has a higher overall weight than a gasoline engine of the same power. For a fixed MTOW (Maximum Take-Off Weight) and operational payload, this extra weight reduces the aircraft fuel fraction, partially nullifying the associated benefits. Therefore, an effort in weight-saving manufacturing technologies is desirable. In this work, in order to achieve the mentioned goals, innovative Electron Beam Melting (EBM) Additive Manufacturing (AM) technologies were applied to a two-stroke, common rail GF56 Diesel engine, developed by the CMD Company for aeronautic applications. For this purpose, a consortium of academic, research and industrial partners, including the CMD Company, the Italian Aerospace Research Centre (CIRA), the University of Naples Federico II and the University of Salerno, carried out a technological project funded by the Italian Ministry of Education and Research (MIUR). The project aimed to optimize the baseline engine in order to improve its performance and increase its airworthiness, and was focused on the definition, design, development and application of enabling technologies for performance improvement of the GF56.
Weight saving of this engine was pursued through the application of EBM-AM technologies, in particular using the Arcam AB A2X machine available at CIRA. The 3D printer processes titanium alloy micro-powders and was employed to realize new connecting rods for the GF56 engine with an additive-oriented design approach. After a preliminary investigation of EBM process parameters and a thermo-mechanical characterization of additively manufactured titanium alloy samples, innovative connecting rods were fabricated. These engine elements were structurally verified, topologically optimized, 3D printed and suitably post-processed. Finally, the overall performance improvement on a typical General Aviation aircraft was estimated by substituting the conventional engine with the optimized GF56 propulsion system.
Keywords: aeronautic propulsion, additive manufacturing, performance improvement, weight saving, piston engine
Procedia PDF Downloads 141
1012 Quantitative, Preservative Methodology for Review of Interview Transcripts Using Natural Language Processing
Authors: Rowan P. Martnishn
Abstract:
During the execution of a National Endowment for the Arts grant, approximately 55 interviews were collected from professionals across various fields. These interviews were used to create deliverables – historical connections for creations that began as art and evolved entirely into computing technology. With dozens of hours' worth of transcripts to be analyzed by qualitative coders, a quantitative methodology was created to sift through the documents. The initial step was to clean and format all the data. First, a basic spelling and grammar check was applied, along with a Python script for normalized formatting that used an open-source grammatical formatter to make the data as coherent as possible. Ten documents were randomly selected for manual review, where words frequently mis-transcribed during transcription were recorded and replaced throughout all other documents. Then, to remove banter and side comments, the transcripts were spliced into paragraphs (separated by change in speaker) and all paragraphs with fewer than 300 characters were removed. Secondly, a keyword extractor – a form of natural language processing in which significant words in a document are selected – was run on each paragraph of every interview. Every proper noun was put into a data structure corresponding to its respective interview. From there, a Bidirectional and Auto-Regressive Transformer (B.A.R.T.) summary model was applied to each paragraph that included any of the proper nouns selected from the interview. At this stage, the information to review had been cut from about 60 hours' worth of data to 20. The data was further processed through light manual observation – any summaries that fit the criteria of the proposed deliverable were selected, along with their locations within the document. This narrowed the data down to about 5 hours' worth of processing.
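The splicing and cross-referencing steps described above can be sketched in Python (a minimal illustration, not the authors' code; the speaker-label pattern and the 300-character cutoff follow the description, while the function names are invented for the example):

```python
import re

def split_paragraphs(raw, min_chars=300):
    # splice the transcript at speaker changes ("Name:" at the start of a line)
    paras = [p.strip() for p in re.split(r"\n(?=[A-Z][\w. ]{0,30}:)", raw)]
    # drop banter and side comments shorter than min_chars characters
    return [p for p in paras if len(p) >= min_chars]

def mark_relevant(paragraphs, keywords):
    # cross-reference: record the location (index) and text of every paragraph
    # mentioning one of the extracted proper nouns, without altering the data
    return [(i, p) for i, p in enumerate(paragraphs)
            if any(k.lower() in p.lower() for k in keywords)]
```

In the described pipeline, only the marked paragraphs would then be passed to the summary model, so the raw transcript itself is never modified.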
The qualitative researchers were then able to find 8 more connections in addition to the previous 4, exceeding the minimum quota of 3 required to satisfy the grant. Major findings of the study and the subsequent curation of this methodology raised a conceptual point crucial to working with qualitative data of this magnitude. In the use of artificial intelligence, there is a general trade-off in a model between breadth of knowledge and specificity. If the model is too general, the user risks leaving out important data; if it is too specific, it has not seen enough data to be useful. This methodology proposes a solution to this trade-off. The data is never altered beyond grammatical and spelling checks. Instead, the important information is marked, creating an indicator of where the significant data is without compromising its purity. Secondly, the data is chunked into smaller paragraphs, giving specificity, and then cross-referenced with the keywords (allowing generalization over the whole document). This way, no data is harmed, and qualitative experts can review the raw data instead of highly manipulated results. Given the success in deliverable creation as well as the circumvention of this trade-off, this methodology should stand as a model for synthesizing qualitative data while maintaining its original form.
Keywords: B.A.R.T. model, keyword extractor, natural language processing, qualitative coding
Procedia PDF Downloads 27
1011 Drug Susceptibility and Genotypic Assessment of Mycobacterial Isolates from Pulmonary Tuberculosis Patients in North East Ethiopia
Authors: Minwuyelet Maru, Solomon Habtemariam, Endalamaw Gadissa, Abraham Aseffa
Abstract:
Background: Tuberculosis is a major public health problem in Ethiopia. The burden of TB is aggravated by the emergence and expansion of drug-resistant tuberculosis, and different lineages of Mycobacterium tuberculosis (M. tuberculosis) have been reported in many parts of the country. Describing the strains of Mycobacterial isolates and their drug susceptibility patterns is therefore necessary. Method: Sputum samples were collected from smear-positive pulmonary TB patients aged ≥ 7 years between October 1, 2012 and September 30, 2013, and Mycobacterial strains were isolated on Lowenstein-Jensen (LJ) media. Each strain was characterized by deletion typing and spoligotyping. Drug sensitivity testing was performed with the indirect proportion method using Middlebrook 7H10 media, and association analysis was done to determine possible risk factors for drug resistance. Result: A total of 144 smear-positive pulmonary tuberculosis patients were enrolled. The age of participants ranged from 7 to 78 years, with a mean age of 29.22 (±10.77) years. In this study, 82.2% (n=97) of the isolates were sensitive to the four first-line anti-tuberculosis drugs, and resistance to any of the four drugs tested was 17.8% (n=21). The highest frequency of any resistance was observed for isoniazid, 13.6% (n=16), followed by streptomycin, 11.8% (n=14). No significant association of isoniazid resistance with HIV, sex or history of previous TB treatment was observed, but there was a significant association with age, highest between 31 and 35 years of age (p=0.01). The majority, 88.9% (n=128), of participants were new cases and only 11.1% (n=16) had a history of previous TB treatment. No MDR-TB was isolated from new cases, while 2 MDR-TB isolates (13.3%) came from re-treatment cases, which was significantly associated with previous TB treatment (p<0.01). Thirty-two different spoligotype patterns were identified, and 74.1% of isolates were grouped into 13 clusters. The dominant strains were SIT 25, 18.1% (n=21), SIT 53, 17.2% (n=20), and SIT 149, 8.6% (n=10).
Lineage 4 was the predominant lineage, followed by lineage 3 and lineage 7, comprising 65.5% (n=76), 28.4% (n=33) and 6% (n=7) respectively. The majority of strains from lineages 3 and 4 were SIT 25 (63.6%) and SIT 53 (26.3%), whereas SIT 343 was the dominant strain of lineage 7 (71.4%). Conclusion: The wide spread of lineages 3 and 4 of the modern lineage and the high number of strain clusters indicate high ongoing transmission. The high proportion of resistance to any of the first-line anti-tuberculosis drugs may be a potential source for the emergence of MDR-TB. The wide spread of SIT 25 and SIT 53, which tend to transmit easily, and the higher isoniazid resistance in the working and mobile age group of 31-35 years may increase the risk of transmission of drug-resistant strains.
Keywords: tuberculosis, drug susceptibility, strain diversity, lineage, Ethiopia, spoligotyping
Procedia PDF Downloads 375
1010 Bioinformatics High Performance Computation and Big Data
Authors: Javed Mohammed
Abstract:
Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a mess; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists, for the first time, to gain a profound understanding of the deepest biological functions. Solving biological problems may require High-Performance Computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With this increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even the simulation of an entire human body. This research paper emphasizes computational biology's growing need for high-performance computing and Big Data.
It illustrates the indispensability of HPC in meeting the scientific and engineering challenges of the twenty-first century, and how Protein Folding (the structure and function of proteins) and Phylogeny Reconstruction (the evolutionary history of a group of genes) can use HPC, which provides sufficient capability for evaluating or solving more limited but meaningful instances. This article also indicates solutions to optimization problems and the benefits of Big Data for computational biology, and illustrates the current state of the art and the future generation of HPC computing with Big Data.
Keywords: high performance, big data, parallel computation, molecular data, computational biology
Procedia PDF Downloads 362
1009 Ethiopian Textile and Apparel Industry: Study of the Information Technology Effects in the Sector to Improve Their Integrity Performance
Authors: Merertu Wakuma Rundassa
Abstract:
Global competition and rapidly changing customer requirements are forcing major changes in the production styles and configuration of manufacturing organizations. Increasingly, traditional centralized and sequential manufacturing planning, scheduling, and control mechanisms are proving insufficiently flexible to respond to changing production styles and highly dynamic variations in product requirements. The traditional approaches limit the expandability and reconfiguration capabilities of manufacturing systems. Thus, many businesses face increasing pressure to lower production costs, improve production quality and increase responsiveness to customers. In textile and apparel manufacturing, globalization has led to increased competition and quality awareness, and these industries have changed tremendously in the last few years. To sustain competitive advantage, companies must re-examine and fine-tune their business processes to deliver high-quality goods at very low cost, and it has become very important for the textile and apparel industries to integrate themselves with information technology to survive. IT can create competitive advantages for companies by improving coordination and communication among trading partners, increasing the availability of information for intermediaries and customers, and providing added value at various stages along the entire chain. Ethiopia is in the process of realizing its potential as a future sourcing location for the global textile and garment industry. With a population of over 90 million people and the fastest-growing non-oil economy in Africa, Ethiopia today represents limitless opportunities for international investors. For the textile and garment industry, Ethiopia promises a low-cost production location with natural resources such as cotton to enable the setup of vertically integrated textile and garment operations.
However, due to the lack of integration of its business activities, the Ethiopian textile and apparel industry cannot yet compete in the global market, while the textile and apparel industries of other countries have changed tremendously in the last few years as globalization has increased competition and quality awareness. The aim of this paper is therefore to study the trend of the Ethiopian textile and apparel industry in applying different IT systems to integrate into the global market.
Keywords: information technology, business integrity, textile and apparel industries, Ethiopia
Procedia PDF Downloads 362
1008 Fabrication of Highly Stable Low-Density Self-Assembled Monolayers by Thiol-Yne Click Reaction
Authors: Leila Safazadeh, Brad Berron
Abstract:
Self-assembled monolayers have a tremendous impact on interfacial science, due to the unique opportunity they offer to tailor surface properties. Low-density self-assembled monolayers are an emerging class of monolayers in which the environment-interfacing portion of the adsorbate has a greater level of conformational freedom compared to traditional monolayer chemistries. This greater range of motion and increased spacing between surface-bound molecules offer new opportunities for tailoring adsorption phenomena in sensing systems. In particular, we expect low-density surfaces to offer a unique opportunity to intercalate surface-bound ligands into the secondary structure of proteins and other macromolecules. Additionally, as many conventional sensing surfaces are built upon gold (SPR or QCM), these surfaces must be compatible with gold substrates. Here, we present the first stable method of generating low-density self-assembled monolayer surfaces on gold for the analysis of their interactions with protein targets. Our approach is based on the 2:1 addition of thiol-yne chemistry to develop new classes of Y-shaped adsorbates on gold, where the environment-interfacing group is spaced laterally from neighboring chemical groups. This technique involves an initial deposition of a crystalline monolayer of 1,10-decanedithiol on the gold substrate, followed by grafting of a loosely packed monolayer through a photoinitiated thiol-yne reaction. Orthogonality of the thiol-yne chemistry (commonly referred to as a click chemistry) allows for the preparation of low-density monolayers with a variety of functional groups. To date, carboxyl-, amine-, alcohol-, and alkyl-terminated monolayers have been prepared using this core technology. Results from surface characterization techniques such as FTIR, contact angle goniometry and electrochemical impedance spectroscopy confirm the proposed low chain-chain interactions of the environment-interfacing groups.
Reductive desorption measurements suggest a higher stability for the click-LDMs compared to traditional SAMs, along with an equivalent packing density at the substrate interface, which confirms the proposed stability of the monolayer-gold interface. In addition, contact angle measurements change in the presence of an applied potential, supporting our description of a surface structure that allows the alkyl chains to freely orient themselves in response to different environments. We are studying the differences in protein adsorption phenomena between well-packed and our loosely packed surfaces, and we expect this data will be ready to present at the GRC meeting. This work aims to contribute to biotechnology in the following manner: molecularly imprinted polymers are a promising recognition mode with several advantages over natural antibodies in the recognition of small molecules. However, because of their bulk polymer structure, they are poorly suited for the rapid diffusion desired for the recognition of proteins and other macromolecules. Molecularly imprinted monolayers are an emerging class of materials in which the surface is imprinted and there is no bulk material to impede mass transfer. Further, the short distance between the binding site and the signal transduction material improves many modes of detection. My dissertation project is to develop a new chemistry for protein-imprinted self-assembled monolayers on gold, for incorporation into SPR sensors. Our unique contribution is the spatial imprinting not only of physical cues (seen in current imprinted monolayer techniques) but also of complementary chemical cues. This is accomplished through photo-click grafting of preassembled ligands around a protein template.
This conference is important for my development as a graduate student, broadening my appreciation of sensor development beyond surface chemistry.
Keywords: low-density self-assembled monolayers, thiol-yne click reaction, molecular imprinting
Procedia PDF Downloads 224
1007 Quantum Graph Approach for Energy and Information Transfer through Networks of Cables
Authors: Mubarack Ahmed, Gabriele Gradoni, Stephen C. Creagh, Gregor Tanner
Abstract:
High-frequency cables commonly connect modern devices and sensors, and the proportion of electric components in such devices is rising fast in an attempt to achieve lighter and greener designs. Modelling the propagation of signals through these cable networks in the presence of parameter uncertainty is a daunting task. In this work, we study the response of high-frequency cable networks using both Transmission Line (TL) and Quantum Graph (QG) theories. We have successfully compared the two theories in terms of reflection spectra using measurements on real, lossy cables. We have derived a generalisation of the vertex scattering matrix to include non-uniform networks – networks of cables with different characteristic impedances and propagation constants. The QG model implicitly takes into account the pseudo-chaotic behavior, at the vertices, of the propagating electric signal. We have successfully compared the asymptotic growth of the eigenvalues of the Laplacian with the predictions of Weyl's law. We investigate the nearest-neighbour level-spacing distribution of the resonances and compare our results with the predictions of Random Matrix Theory (RMT); to achieve this, we compare our graphs with the generalisation of the Wigner distribution for open systems. The problem of scattering from networks of cables can also provide an analogue model for wireless communication in highly reverberant environments. In this context, we provide a preliminary analysis of the statistics of communication capacity across cable networks, whose eventual aim is to enable detailed laboratory testing of information transfer rates using software-defined radio. We specialise this analysis in particular to the case of MIMO (Multiple-Input Multiple-Output) protocols. We have successfully validated our QG model against both the TL model and laboratory measurements: the growth of the eigenvalues compares well with Weyl's law, and the level-spacing distribution agrees well with RMT predictions.
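As a toy illustration of the MIMO capacity statistics discussed above, the Shannon capacity of a 2x2 channel with equal power allocation is C = log2 det(I + (SNR/2) H Hᵀ). The sketch below is a simplified, real-valued version (actual cable-network channels are complex-valued and frequency-dependent, and the function name is invented for the example):

```python
import math

def mimo_capacity_2x2(h, snr):
    # Shannon capacity (bits/s/Hz) of a 2x2 real channel with equal power
    # allocation per transmit stream: C = log2 det(I + (snr/2) * H H^T)
    g = [[sum(h[i][k] * h[j][k] for k in range(2)) for j in range(2)]
         for i in range(2)]
    a = 1 + snr / 2 * g[0][0]; b = snr / 2 * g[0][1]
    c = snr / 2 * g[1][0];     d = 1 + snr / 2 * g[1][1]
    return math.log2(a * d - b * c)

# two ideal parallel paths: each stream sees SNR/2 = 1, i.e. 1 bit/s/Hz per stream
identity_capacity = mimo_capacity_2x2([[1.0, 0.0], [0.0, 1.0]], 2.0)
# a rank-1 (fully correlated) channel supports only one effective stream
rank1_capacity = mimo_capacity_2x2([[1.0, 0.0], [1.0, 0.0]], 2.0)
```

Averaging such capacities over an ensemble of random channel matrices gives the capacity statistics of interest in reverberant environments.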
The results we achieved in the MIMO application compare favourably with the predictions of parallel ongoing research (sponsored by NEMF21).
Keywords: eigenvalues, multiple-input multiple-output, quantum graph, random matrix theory, transmission line
Procedia PDF Downloads 172
1006 Landing Performance Improvement Using Genetic Algorithm for Electric Vertical Take-Off and Landing Aircraft
Authors: Willian C. De Brito, Hernan D. C. Munoz, Erlan V. C. Carvalho, Helder L. C. De Oliveira
Abstract:
In order to reduce commute times for short trips and relieve traffic in large cities, a new transport category has been the subject of research and new designs worldwide. The air taxi market promises to change the way people live and commute, using vehicles with the ability to take off and land vertically and to provide passenger transport equivalent to a car, with mobility within large cities and between cities. Today's civil air transport remains costly and accounts for 2% of man-made CO₂ emissions. Taking advantage of this scenario, many companies have developed their own Vertical Take-Off and Landing (VTOL) designs, seeking to meet comfort, safety, low-cost and flight-time requirements in a sustainable way. Thus, the use of green power supplies, especially batteries, and fully electric power plants is the most common choice for these emerging aircraft. However, it is still a challenge to find a feasible way to rely on batteries rather than conventional petroleum-based fuels. Batteries are heavy and have an energy density still below that of gasoline, diesel or kerosene. Therefore, despite all the clear advantages, all-electric aircraft (AEA) still have low flight autonomy and high operational cost, since the batteries must be recharged or replaced. In this sense, this paper addresses a way to optimize the energy consumption in a typical mission of an air taxi aircraft. The approach and landing procedure was chosen as the subject of an optimization genetic algorithm, while the final program can be adapted for take-off and flight-level changes as well. Data from a real tilt-rotor aircraft with a fully electric power plant were used to fit the derived dynamic equations of motion. Although a tilt-rotor design is used as a proof of concept, the optimization can be adapted to other design concepts, even those with independent motors for hover and cruise flight phases.
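A minimal real-coded genetic algorithm of the kind described can be sketched as follows (an illustrative toy, not the authors' implementation; the two control variables and the quadratic cost standing in for electric power consumption are invented for the example):

```python
import random

def genetic_minimize(cost, bounds, pop_size=40, generations=60, seed=0):
    # real-coded GA: elitism, blend crossover, bounded Gaussian mutation
    rng = random.Random(seed)
    def clamp(x, lo, hi):
        return max(lo, min(hi, x))
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=cost)
        elite = scored[: pop_size // 5]          # keep the best 20% unchanged
        children = list(elite)
        while len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]  # blend crossover
            child = [clamp(x + rng.gauss(0, 0.1 * (hi - lo)), lo, hi)
                     for x, (lo, hi) in zip(child, bounds)]      # mutate in bounds
            children.append(child)
        pop = children
    return min(pop, key=cost)

def energy(v):
    # toy proxy cost: power grows with deviation from a reference airspeed
    # ratio of 1.0 and a gentle -3 degree flight-path angle (assumed values)
    return (v[0] - 1.0) ** 2 + 0.5 * (v[1] + 3.0) ** 2

best = genetic_minimize(energy, [(0.5, 1.5), (-10.0, 0.0)])
```

In the actual problem, the cost function would integrate electric power over the simulated landing trajectory, and the bounds would encode the safety and comfort constraints.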
For a given trajectory, the best set of control variables is calculated to provide the time-history response for the aircraft's attitude, rotor RPM and thrust direction (or vertical and horizontal thrust, for independent-motor designs) that, if followed, results in the minimum electric power consumption along that landing path. Safety, comfort and design constraints are imposed to give representativeness to the solution, and results are highly dependent on these constraints. For the tested cases, performance improvement ranged from 5 to 10% when changing initial airspeed, altitude, flight path angle, and attitude.
Keywords: air taxi travel, all electric aircraft, batteries, energy consumption, genetic algorithm, landing performance, optimization, performance improvement, tilt rotor, VTOL design
Procedia PDF Downloads 113
1005 Physical Properties Characterization of Shallow Aquifer and Groundwater Quality Using Geophysical Method Based on Electrical Resistivity Tomography in Arid Region, Northeastern Area of Tunisia: A Study Case of Smar Aquifer
Authors: Nesrine Frifita
Abstract:
In recent years, serious interest in underground sources has led to more intensive studies of the depth, thickness, geometry and properties of aquifers. Geophysical methods are the common techniques used to explore the subsurface. However, determining the exact location of groundwater in subsurface layers is one of the problems that need to be resolved, while the biggest concern is the quality of the groundwater, which faces pollution risk, especially with water shortage in arid regions under remarkable climate change. The present study was conducted using electrical resistivity tomography at the Jeffara coastal area in Southeast Tunisia to image the potential shallow aquifer and study its physical properties. The purpose of this study is to understand the characteristics and depth of the Smar aquifer, so that it can be used as a reference in groundwater drilling to guide farmers and improve the living conditions of the inhabitants of nearby cities. The use of the Wenner-Schlumberger array for data acquisition is suitable for obtaining a deeper profile in areas with homogeneous layers. To that end, six electrical resistivity profiles were carried out in the Smar watershed using 72 electrodes with 4 and 5 m spacing. The resistivity measurements were carefully interpreted by a least-squares inversion technique using the RES2DINV program. Findings show that the Smar aquifer is about 31 m thick and extends to 36.5 m depth in the downstream area of Oued Smar. The defined depth and geometry of the Smar aquifer indicate that the sedimentary cover thins toward the coast, and the Smar shallow aquifer becomes deeper toward the west. The resistivity values show a significant contrast, even reaching < 1 Ωm in ERT1; such a value can be related to saline water, which foretells a risk of pollution and poor groundwater quality.
The ERT1 geoelectrical model defines an unsaturated zone, while under the ERT3 site the geoelectrical model presents a saturated zone, whose low resistivity values indicate local surface water coming from the nearby Office of the National Sanitation Utility (ONAS); this water can be a source of recharge for the studied shallow aquifer and further deteriorate the groundwater quality in this region.
Keywords: electrical resistivity tomography, groundwater, recharge, smar aquifer, southeastern tunisia
Procedia PDF Downloads 73
1004 Determination of the Needs for Development of Infertility Psycho-Educational Program and the Design of a Website about Infertility for University Students
Authors: Bahar Baran, Şirin Nur Kaptan, D. Yelda Kağnıcı, Erol Esen, Barışcan Öztürk, Ender Siyez, Diğdem M. Siyez
Abstract:
It is known that some factors associated with infertility are preventable and that young people's knowledge in this regard is inadequate, but very few studies focus on effective prevention efforts for infertility. Psycho-educational programs have an important place in infertility prevention. Nowadays, considering household rates of technology and Internet use, young people turn to websites as a primary source of information about health problems they encounter. However, one of the prerequisites for the effectiveness of websites or face-to-face psycho-education programs is to consider the needs of participants; in particular, these programs are expected to be appropriate to the cultural context and the diversity of beliefs and values in society. The aim of this research is to determine what university students want to learn about infertility and fertility and to examine their views on the structure of the website. The sample of the research consisted of 9693 university students studying in 21 public higher education programs in Turkey; 51.6% (n = 5002) were female and 48.4% (n = 4691) were male. The Needs Analysis Questionnaire developed by the researchers was used as the data collection tool, and descriptive analysis was conducted in SPSS software. According to the findings, the topics that university students most wanted to study about infertility and fertility were 'misconceptions about infertility' (94.9%), 'misconceptions about sexual behaviors' (94.6%), 'factors affecting infertility' (92.8%), 'sexual health and reproductive health' (92.5%), 'sexually transmitted diseases' (92.7%), 'sexuality and society' (90.9%), and 'healthy life (help centers)' (90.4%). In addition, the questions about how the content of the website should be designed for university students were analyzed descriptively.
According to the results, 91.5% (n = 8871) of the university students proposed using frequently asked questions and their answers, 89.2% (n = 8648) stated that expert videos should be included, 82.6% (n = 8008) requested animations and simulations, 76.1% (n = 7380) proposed different content according to sex, and 66% (n = 6460) proposed different designs according to sex. The results of the research indicated that the findings are similar to the contents of programs carried out in other countries in terms of the topics to be studied. It is suggested that the opinions of the participants be taken into account during the design of the website.
Keywords: infertility, prevention, psycho-education, web based education
Procedia PDF Downloads 212
1003 Domestic Violence against Women and the Nutritional Status of Their Under-5 Children: A Cross Sectional Survey in Urban Slums of Chittagong, Bangladesh
Authors: Mohiuddin Ahsanul Kabir Chowdhury, Ahmed Ehsanur Rahman, Nazia Binte Ali, Abdullah Nurus Salam Khan, Afrin Iqbal, Mohammad Mehedi Hasan, Salma Morium, Afsana Bhuiyan, Shams El Arifeen
Abstract:
Violence against women has been treated as a global epidemic that is as fatal as any serious disease or accident. As in many other low-income countries, it is also common in Bangladesh. Despite documented evidence from some other countries, domestic violence against women (DVAW) is not yet considered a factor in child malnutrition in Bangladesh. Hence, the aim of the study was to investigate the association between DVAW and the nutritional status of under-5 children in the context of slum areas of Chittagong, Bangladesh. A cross-sectional survey was conducted among 87 women of reproductive age having at least one child under 5 years of age and living with their husbands for at least the last year, in selected slums of the Chittagong City Corporation area. The data collection tools were a structured questionnaire for the study participants and mid-upper arm circumference (MUAC) measurement for the nutritional status of the under-5 children. The data underwent descriptive and regression analysis. Of the 87 respondents, 50 (57.5%) reported suffering domestic violence by their husbands during the last year. Physical violence was found to be significantly associated with age (p=0.02), age at marriage (p=0.043), wealth score (p=0.000), and knowledge regarding the law (p=0.017). According to the MUAC measurements, 21% of children were suffering from severe acute malnutrition (SAM) and the same percentage from moderate acute malnutrition (MAM). The unadjusted odds ratio suggested a negative association between domestic violence and nutritional status, but logistic regression adjusting for other variables showed significant associations with total family income (p=0.006), wealth score (p=0.031), age at marriage (p=0.029) and number of children (p=0.006). Domestic violence against women and undernutrition of children are both highly prevalent in Bangladesh.
More extensive research should be performed to identify the factors contributing to the high prevalence of domestic violence and malnutrition in urban slums of Bangladesh. Household-based interventions are needed to limit this pressing problem. In a nutshell, effective community participation, education and counseling are essential to create awareness in the community.
Keywords: Bangladesh, cross sectional survey, domestic violence against women, nutritional status, under-5 children, urban slums
Procedia PDF Downloads 195
1002 Machine Learning Assisted Selective Emitter Design for Solar Thermophotovoltaic System
Authors: Ambali Alade Odebowale, Andargachew Mekonnen Berhe, Haroldo T. Hattori, Andrey E. Miroshnichenko
Abstract:
Solar thermophotovoltaic systems (STPV) have emerged as a promising solution to overcome the Shockley-Queisser limit, a significant impediment in the direct conversion of solar radiation into electricity using conventional solar cells. The STPV system comprises essential components such as an optical concentrator, selective emitter, and a thermophotovoltaic (TPV) cell. The pivotal element in achieving high efficiency in an STPV system lies in the design of a spectrally selective emitter or absorber. Traditional methods for designing and optimizing selective emitters are often time-consuming and may not yield highly selective emitters, posing a challenge to the overall system performance. In recent years, the application of machine learning techniques in various scientific disciplines has demonstrated significant advantages. This paper proposes a novel nanostructure composed of four-layered materials (SiC/W/SiO2/W) to function as a selective emitter in the energy conversion process of an STPV system. Unlike conventional approaches widely adopted by researchers, this study employs a machine learning-based approach for the design and optimization of the selective emitter. Specifically, a random forest algorithm (RFA) is employed for the design of the selective emitter, while the optimization process is executed using genetic algorithms. This innovative methodology holds promise in addressing the challenges posed by traditional methods, offering a more efficient and streamlined approach to selective emitter design. The utilization of a machine learning approach brings several advantages to the design and optimization of a selective emitter within the STPV system. Machine learning algorithms, such as the random forest algorithm, have the capability to analyze complex datasets and identify intricate patterns that may not be apparent through traditional methods. 
This allows for a more comprehensive exploration of the design space, potentially leading to highly efficient emitter configurations. Moreover, the application of genetic algorithms in the optimization process enhances the adaptability and efficiency of the overall system. Genetic algorithms mimic the principles of natural selection, enabling the exploration of a diverse range of emitter configurations and facilitating the identification of optimal solutions. This not only accelerates the design and optimization process but also increases the likelihood of discovering configurations that exhibit superior performance compared to traditional methods. In conclusion, the integration of machine learning techniques in the design and optimization of a selective emitter for solar thermophotovoltaic systems represents a groundbreaking approach. This innovative methodology not only addresses the limitations of traditional methods but also holds the potential to significantly improve the overall performance of STPV systems, paving the way for enhanced solar energy conversion efficiency.
Keywords: emitter, genetic algorithm, radiation, random forest, thermophotovoltaic
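The genetic-algorithm step of the workflow above can be sketched in a few lines. This is a minimal illustration only: the `fitness` function and the target layer thicknesses below are a toy quadratic stand-in for the spectral figure of merit, which in the actual study would come from an electromagnetic simulation or the trained random-forest surrogate.

```python
import random

# Hypothetical target layer thicknesses (um) for the SiC/W/SiO2/W stack.
TARGET = [0.30, 0.05, 0.10, 0.05]

def fitness(layers):
    # Toy figure of merit: higher is better, penalizing squared deviation
    # from the target stack (stands in for a solver or ML surrogate).
    return -sum((a - b) ** 2 for a, b in zip(layers, TARGET))

def evolve(pop_size=30, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(0.01, 0.5) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, 4)           # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:              # Gaussian mutation on one gene
                i = rng.randrange(4)
                child[i] = min(0.5, max(0.01, child[i] + rng.gauss(0, 0.02)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

In the real pipeline the loop structure stays the same; only the fitness evaluation changes, which is why a fast surrogate model makes the optimization tractable.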
Procedia PDF Downloads 60
1001 Smart Irrigation Systems and Website-Based Platform for Farmer Welfare
Authors: Anusha Jain, Santosh Vishwanathan, Praveen K. Gupta, Shwetha S., Kavitha S. N.
Abstract:
Agriculture has a major impact on the Indian economy, employing a higher share of the workforce than any other sector of the country. Currently, most traditional agricultural practices and farming methods are manual, so farmers often fail to realize their maximum productivity due to rising labour costs, inefficient use of water sources leading to wastage, and inadequate soil moisture content, subsequently threatening the food security of the country. This research paper aims to solve this problem by developing a full-fledged web application-based platform that can be coupled with a microcontroller-based automated irrigation system. The system schedules the irrigation of crops based on real-time soil moisture content, measured by soil moisture sensors and tailored to each crop's requirements, using WSN (wireless sensor networks) and M2M (machine-to-machine communication) concepts, thus optimizing the use of the limited available water resource and maximizing crop yield. This robust automated irrigation system provides end-to-end automation of crop irrigation under any circumstances, such as droughts, irregular rainfall patterns, and extreme weather conditions. The platform is also intended to foster a nationwide united farming community and ensure the welfare of farmers, equipping them with prerequisite knowledge of technology and the latest farming practices. To achieve this, the MailChimp mailing service is used: the email addresses of interested farmers and individuals are recorded, and curated articles on innovations in the world of agriculture are delivered to them via email. The proposed system also offers a service through which nearby crop vendors can enter their pickup locations, accepted prices and other relevant information, enabling farmers to choose their vendors wisely.
Along with this, we have created a blogging service that enables farmers and agricultural enthusiasts to share experiences, helpful knowledge, hardships, etc., with the entire farming community. These are some of the many features that the platform has to offer.
Keywords: WSN (wireless sensor networks), M2M (machine-to-machine communication), automation, irrigation system, sustainability, SaaS (software as a service), soil moisture sensor
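The threshold-based scheduling logic described above can be sketched as follows. The crop thresholds, sensor readings and the `irrigation_command` helper are hypothetical illustrations of the idea, not part of the deployed system.

```python
# Hypothetical per-crop volumetric moisture thresholds (%) below which
# the controller should irrigate.
CROP_THRESHOLDS = {"wheat": 35.0, "rice": 60.0}

def irrigation_command(crop, sensor_readings):
    """Average the WSN soil-moisture readings and decide whether to irrigate."""
    if crop not in CROP_THRESHOLDS:
        raise ValueError(f"no threshold configured for crop {crop!r}")
    avg = sum(sensor_readings) / len(sensor_readings)
    return {"crop": crop, "moisture": avg, "irrigate": avg < CROP_THRESHOLDS[crop]}

# Example: three sensor nodes report moisture below the wheat threshold,
# so the command asks the microcontroller to open the valve.
cmd = irrigation_command("wheat", [28.0, 31.5, 30.2])
```

In an M2M deployment, a decision like `cmd` would be serialized and pushed to the field microcontroller, while the web platform logs it for the farmer.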
Procedia PDF Downloads 128
1000 Optimization of Cobalt Oxide Conversion to Co-Based Metal-Organic Frameworks
Authors: Aleksander Ejsmont, Stefan Wuttke, Joanna Goscianska
Abstract:
Gaining control over particle shape, size and crystallinity is an ongoing challenge for many materials, and metal-organic frameworks (MOFs) in particular have recently been widely studied. Besides their remarkable porosity and interesting topologies, morphology has proven to be a significant feature that can affect further applications of the material, so seeking new approaches that enable MOF morphology modulation is important. MOFs are reticular structures whose building blocks are made up of organic linkers and metallic nodes. The most common strategy for supplying the metal source is to use salts, which usually exhibit high solubility and therefore hinder morphology control. However, there has been growing interest in using metal oxides as structure-directing agents towards MOFs due to their very low solubility and shape preservation; a metal oxide can be treated as a metal reservoir during MOF synthesis. Up to now, reports on obtaining MOFs from metal oxides have mostly presented the conversion of ZnO to ZIF-8. Other oxides, for instance Co₃O₄, are often overlooked due to their structural stability and insolubility in aqueous solutions. Cobalt-based materials are famed for their catalytic activity; therefore, the development of their efficient synthesis is worth attention. In the presented work, an optimized Co₃O₄ transition to Co-MOF via a solvothermal approach was proposed. The starting point of the research was the synthesis of Co₃O₄ flower petals and needles under hydrothermal conditions using different cobalt salts (e.g., cobalt(II) chloride and cobalt(II) nitrate) in the presence of urea and hexadecyltrimethylammonium bromide (CTAB) surfactant as a capping agent. After receiving cobalt hydroxide, the calcination process was performed at various temperatures (300–500 °C). The cobalt oxides, as a source of cobalt cations, were then subjected to reaction with trimesic acid in a solvothermal environment at a temperature of 120 °C, leading to Co-MOF fabrication.
The solution maintained in the system was a mixture of water, dimethylformamide, and ethanol, with the addition of strong acids (HF and HNO₃). To establish how solvents affect metal oxide conversion, several different solvent ratios were also applied. The materials received were characterized with analytical techniques, including X-ray powder diffraction, energy dispersive spectroscopy, low-temperature nitrogen adsorption/desorption, and scanning and transmission electron microscopy. It was confirmed that the synthetic routes led to the formation of Co₃O₄ and Co-based MOF varied in particle shape and size. The diffractograms showed that a crystalline phase was obtained for Co₃O₄ as well as for Co-MOF. The Co₃O₄ obtained from nitrates and with low-temperature calcination resulted in smaller particles. The study indicated that cobalt oxide particles of different sizes influence the efficiency of conversion and the morphology of Co-MOF. The highest conversion was achieved using metal oxides with small crystallites.
Keywords: Co-MOF, solvothermal synthesis, morphology control, core-shell
Procedia PDF Downloads 160
999 Peer Instruction, Technology, Education for Textile and Fashion Students
Authors: Jimmy K. C. Lam, Carrie Wong
Abstract:
One of the key goals on Learning and Teaching documented in the University strategic plan 2012/13–2017/18 is to encourage active learning, the use of innovative teaching approaches and technology, and the adoption of flexible and varied teaching delivery methods. This research reports on a recent visit to Prof. Eric Mazur at Harvard University concerning Peer Instruction: collaborative learning in large classes and the innovative use of technology to enable new modes of learning. Peer Instruction is a research-based, interactive teaching method developed by Prof. Eric Mazur at Harvard University in the 1990s. It has been adopted across disciplines and institution types, and throughout the world. One problem with conventional teaching lies in the presentation of the material. Frequently, it comes straight out of the textbook or notes, giving students little incentive to attend class. This traditional presentation is usually delivered as a monologue in front of a passive audience, and only exceptional lecturers are capable of holding students' attention for an entire lecture period. Consequently, lectures simply reinforce students' feeling that the most important step in mastering the material is memorizing a zoo of unrelated examples. To address these misconceptions about learning, Prof. Mazur's team developed Peer Instruction, a method which involves students in their own learning during lectures and focuses their attention on the underlying concepts. Lectures are interspersed with conceptual questions, called ConcepTests, designed to expose common difficulties in understanding the material. The students are given one or two minutes to think about the question and formulate their own answers; they then spend two or three minutes discussing their answers in groups of three or four, attempting to reach consensus on the correct answer.
This process forces the students to think through the arguments being developed and enables them to assess their understanding of the concepts before they leave the classroom. The findings on Peer Instruction and the innovative use of technology in teaching at Harvard University were applied to first-year Textiles and Fashion students in Hong Kong. A survey of 100 students showed that over 80% enjoyed the flexibility of peer instruction, and 70% enjoyed the instant feedback from the Clicker system (the student response system used at Harvard University). Further work will continue to explore the possibility of peer instruction for art and fashion students.
Keywords: peer instruction, education, technology, fashion
Procedia PDF Downloads 315
998 A Web and Cloud-Based Measurement System Analysis Tool for the Automotive Industry
Authors: C. A. Barros, Ana P. Barroso
Abstract:
Any industrial company needs to determine the amount of variation that exists within its measurement process and guarantee the reliability of its data by studying the performance of its measurement system in terms of linearity, bias, repeatability and reproducibility, and stability. This issue is critical for automotive industry suppliers, who are required to be certified to the IATF 16949:2016 standard (which replaces ISO/TS 16949) of the International Automotive Task Force, defining the requirements of a quality management system for companies in the automotive industry. Measurement System Analysis (MSA) is one of its mandatory tools. Frequently, the measurement systems in companies are not connected to the equipment and do not incorporate the methods proposed by the Automotive Industry Action Group (AIAG). To address these constraints, an R&D project is in progress whose objective is to develop a web and cloud-based MSA tool. This MSA tool incorporates Industry 4.0 concepts, such as Internet of Things (IoT) protocols to assure the connection with the measuring equipment, cloud computing, artificial intelligence, statistical tools, and advanced mathematical algorithms. This paper presents the preliminary findings of the project. The web and cloud-based MSA tool is innovative because it implements all statistical tests proposed in the MSA-4 reference manual from AIAG, as well as other emerging methods and techniques. As it is integrated with the measuring devices, it reduces the manual input of data and therefore the errors. The tool ensures traceability of all performed tests and can be used in quality laboratories and on production lines. Besides, it monitors MSAs over time, allowing both the analysis of deviations in the variation of the measurements performed and the management of measurement equipment and calibrations. To develop the MSA tool, a ten-step approach was implemented.
First, a benchmarking analysis of the current competitors and commercial solutions linked to MSA was performed with respect to the Industry 4.0 paradigm. Next, the size of the target market for the MSA tool was analysed. Afterwards, data flow and traceability requirements were analysed in order to implement an IoT data network that interconnects with the equipment, preferably wirelessly. The MSA web solution was designed under UI/UX principles, and an API in Python was developed to perform the algorithms and the statistical analysis. Continuous validation of the tool by companies is being performed to assure real-time management of the 'big data'. The main results of this R&D project are: the web and cloud-based MSA tool; the Python API; new algorithms for the market; and the UI/UX style guide of the tool. The proposed MSA tool adds value to the state of the art as it ensures an effective response to the new challenges of measurement systems, which are increasingly critical in production processes. Although the automotive industry has triggered the development of this innovative MSA tool, other industries would also benefit from it. Currently, companies from the molds and plastics, chemical and food industries are already validating it.
Keywords: automotive industry, Industry 4.0, Internet of Things, IATF 16949:2016, measurement system analysis
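To illustrate the kind of statistics such a tool automates, here is a simplified average-and-range Gage R&R sketch loosely in the spirit of the AIAG MSA manual. The K1/K2 constants and the measurement data below are illustrative assumptions, not the project's actual algorithms or datasets.

```python
import statistics

# Approximate d2-based constants for a tiny 2-trial, 2-appraiser study
# (illustrative values; the MSA-4 manual tabulates the exact ones).
K1 = 0.8862
K2 = 0.7071

# measurements[appraiser][part] = (trial1, trial2) -- toy data.
measurements = {
    "A": [(10.1, 10.2), (10.4, 10.5), (9.9, 10.0)],
    "B": [(10.0, 10.2), (10.6, 10.4), (10.0, 10.1)],
}

def gage_rr(data):
    # Repeatability (equipment variation) from the mean within-part range.
    ranges = [abs(t1 - t2) for parts in data.values() for (t1, t2) in parts]
    ev = statistics.mean(ranges) * K1
    # Reproducibility (appraiser variation) from the spread of appraiser means,
    # corrected for the repeatability contribution (floored at zero).
    appraiser_means = [statistics.mean([statistics.mean(p) for p in parts])
                       for parts in data.values()]
    x_diff = max(appraiser_means) - min(appraiser_means)
    n_parts, n_trials = len(next(iter(data.values()))), 2
    av_sq = (x_diff * K2) ** 2 - ev ** 2 / (n_parts * n_trials)
    av = max(av_sq, 0.0) ** 0.5
    return ev, av, (ev ** 2 + av ** 2) ** 0.5  # (EV, AV, combined GRR)

ev, av, grr = gage_rr(measurements)
```

A production tool would add the full ANOVA method, linearity, bias and stability studies, and feed the measurements in directly from the connected equipment rather than from hand-entered tuples.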
Procedia PDF Downloads 213
997 Test Method Development for Evaluation of Process and Design Effect on Reinforced Tube
Authors: Cathal Merz, Gareth O’Donnell
Abstract:
Coil-reinforced thin-walled (CRTW) tubes are used in medicine to treat problems affecting blood vessels within the body through minimally invasive procedures. The CRTW tube considered in this research makes up part of such a device and is inserted into the patient via the femoral or brachial arteries and manually navigated to the site in need of treatment. This procedure removes the need for open surgery but is limited by the reduction in blood vessel lumen diameter and the increase in tortuosity of blood vessels deep in the brain. In order to maximize the capability of these procedures, CRTW tube devices are being manufactured with decreasing wall thicknesses in order to deliver treatment deeper into the body and to allow passage of other devices through their inner diameter. This introduces significant stresses into the device materials, which has resulted in an observed increase in the proximal segment of the device breaking into two separate pieces after it has failed by buckling. As there is currently no international standard for measuring the mechanical properties of these CRTW tube devices, it is difficult to analyze this problem accurately. The aim of the current work is to address this gap in the biomedical device industry by developing a measurement system that can be used to quantify the effect of process and design changes on CRTW tube performance, aiding in the development of better-performing, next-generation devices. Using materials testing frames, micro-computed tomography (micro-CT) imaging, experiment planning, analysis of variance (ANOVA), t-tests and regression analysis, test methods have been developed for assessing the impact of process and design changes on the device.
The major findings of this study have been an insight into the suitability of buckle and three-point bend tests for measuring the effect of varying processing factors on the device's performance, and guidelines for interpreting the output data from the test methods. The findings of this study are of significant interest with respect to verifying and validating key process and design changes associated with the device structure and material condition. Test method integrity evaluation is explored throughout.
Keywords: neurovascular catheter, coil reinforced tube, buckling, three-point bend, tensile
Procedia PDF Downloads 116
996 Machine Learning and Internet of Things for Smart Hydrology of the Mantaro River Basin
Authors: Julio Jesus Salazar, Julio Jesus De Lama
Abstract:
The fundamental objective of hydrological studies applied to the engineering field is to determine the statistically consistent volumes or water flows that, in each case, allow us to size or design a series of elements or structures to effectively manage and develop a river basin. To determine these values, there are several ways of working within the framework of traditional hydrology: (1) study each of the factors that influence the hydrological cycle, (2) study the historical behavior of the hydrology of the area, (3) study the historical behavior of hydrologically similar zones, and (4) other studies (rain simulators or experimental basins). This range of studies in a given basin is very varied and complex and presents the difficulty of collecting the data in real time. In this complex space, the study of these variables can only be mastered by collecting and transmitting data to decision centers through the Internet of Things and artificial intelligence. Thus, this research work implemented the learning project of the sub-basin of the Shullcas river in the Andean basin of the Mantaro river in Peru. The sensor firmware to collect and communicate hydrological parameter data was programmed and tested in similar basins of the European Union. The machine learning applications were programmed to choose the algorithms that lead to the best solution for determining the rainfall-runoff relationship captured in the different polygons of the sub-basin. Tests were carried out in the mountains of Europe and in the sub-basins of the Shullcas river (Huancayo) and the Yauli river (Jauja), at altitudes close to 5000 m.a.s.l., giving the following conclusions: to guarantee correct communication, the distance between devices should not exceed 15 km.
To minimize the energy consumption of the devices and avoid collisions between packets, distances of between 5 and 10 km are advisable; in this way the transmission power can be reduced and a higher bitrate can be used. If the communication elements of the devices of the network (Internet of Things) installed in the basin do not have good visibility between them, the distance should be reduced to the range of 1-3 km. The energy efficiency of the Atmel microcontrollers present in Arduino boards is not adequate to meet the requirements of system autonomy. To increase the autonomy of the system, it is recommended to use low-consumption systems, such as ultra-low-power ARM Cortex microcontrollers, together with high-performance direct current (DC) to direct current (DC) converters. The machine learning system has begun learning the Shullcas system to generate the best hydrology of the sub-basin. This will improve as the machine learning models and the data entered into the big data store converge with every second of new data. This will provide services to each of the applications of the complex system so as to return the best data on the determined flows.
Keywords: hydrology, internet of things, machine learning, river basin
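As a sketch of the rainfall-runoff learning step described above, the simplest possible model is an ordinary least-squares line relating rainfall to runoff, whose slope plays the role of a runoff coefficient. The data points and the linear form are illustrative assumptions, not Shullcas measurements or the project's actual algorithm.

```python
# Hypothetical paired observations for one polygon of the sub-basin.
rainfall = [2.0, 5.0, 8.0, 12.0, 20.0]   # mm per interval (illustrative)
runoff   = [0.5, 1.6, 2.7, 4.1, 7.0]     # flow response (illustrative units)

def fit_line(xs, ys):
    """Ordinary least squares for runoff = slope * rainfall + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

coef, intercept = fit_line(rainfall, runoff)
predicted = coef * 10.0 + intercept  # expected runoff for a 10 mm event
```

In the actual system, a model-selection layer would compare candidates like this against nonlinear learners per polygon and keep whichever best fits the incoming sensor stream.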
Procedia PDF Downloads 158
995 Food Safety in Wine: Removal of Ochratoxin A in Contaminated White Wine Using Commercial Fining Agents
Authors: Antònio Inês, Davide Silva, Filipa Carvalho, Luís Filipe-Riberiro, Fernando M. Nunes, Luís Abrunhosa, Fernanda Cosme
Abstract:
The presence of mycotoxins in foodstuffs is a matter of concern for food safety. Mycotoxins are toxic secondary metabolites produced by certain molds, ochratoxin A (OTA) being one of the most relevant. Wines can also be contaminated with these toxicants, and several authors have demonstrated the presence of mycotoxins in wine, especially ochratoxin A. Its chemical structure is a dihydroisocoumarin connected at the 7-carboxy group to a molecule of L-β-phenylalanine via an amide bond. As these toxicants can never be completely removed from the food chain, many countries have defined maximum levels in food in order to address health concerns. OTA contamination of wines might be a risk to consumer health, thus requiring treatments to achieve acceptable standards for human consumption. The maximum acceptable level of OTA in wines is 2.0 μg/kg according to Commission Regulation No. 1881/2006. Therefore, the aim of this work was to reduce OTA to safer levels using different fining agents and to assess their impact on the physicochemical characteristics of white wine. To evaluate their efficiency, 11 commercial fining agents (mineral, synthetic, animal and vegetable proteins) were used to explore new approaches to OTA removal from white wine. Trials (including a control without addition of a fining agent) were performed in white wine artificially supplemented with OTA (10 µg/L). OTA analyses were performed after wine fining. The wine was centrifuged at 4000 rpm for 10 min, and 1 mL of the supernatant was collected and mixed with an equal volume of acetonitrile/methanol/acetic acid (78:20:2 v/v/v). The solid fractions obtained after fining were also centrifuged (4000 rpm, 15 min), the resulting supernatant discarded, and the pellet extracted with 1 mL of the above solution and 1 mL of H₂O. OTA analysis was performed by HPLC with fluorescence detection.
The most effective fining agent in removing OTA (80%) from white wine was a commercial formulation containing gelatin, bentonite and activated carbon. Removals between 10% and 30% were obtained with potassium caseinate, yeast cell walls and pea protein. With bentonites, carboxymethylcellulose, polyvinylpolypyrrolidone and chitosan, no considerable OTA removal was verified. Subsequently, the effectiveness of seven commercial activated carbons was also evaluated and compared with that of the commercial formulation containing gelatin, bentonite and activated carbon. The different activated carbons were applied at the concentrations recommended by the manufacturers in order to evaluate their efficiency in reducing OTA levels. The trial and OTA analysis were performed as explained previously. The results showed that in white wine all activated carbons except one reduced OTA by 100%, whereas the commercial formulation containing gelatin, bentonite and activated carbon reduced the OTA concentration by only 73%. These results may provide useful information for winemakers, namely for the selection of the most appropriate oenological product for OTA removal, reducing wine toxicity and simultaneously enhancing food safety and wine quality.
Keywords: wine, OTA removal, food safety, fining
Procedia PDF Downloads 537
994 The Use of Venous Glucose, Serum Lactate and Base Deficit as Biochemical Predictors of Mortality in Polytraumatized Patients: A Comparative Study with the Trauma and Injury Severity Score and Acute Physiology and Chronic Health Evaluation IV
Authors: Osama Moustafa Zayed
Abstract:
Aim of the work: To evaluate the effectiveness of venous glucose, serum lactate levels and base deficit in polytraumatized patients as simple parameters to predict mortality in these patients, compared with the predictive value of the Trauma and Injury Severity Score (TRISS) and Acute Physiology and Chronic Health Evaluation IV (APACHE IV). Introduction: Trauma is a serious global health problem, accounting for approximately one in 10 deaths worldwide and 5 million deaths per year. Prediction of mortality in trauma patients is an important part of trauma care, and several trauma scores have been devised to predict injury severity and risk of mortality, the Trauma and Injury Severity Score (TRISS) being the most commonly used. Regardless of its accuracy, TRISS is based on an anatomical description of every injury and cannot be assigned to patients until a full diagnostic procedure has been performed. We therefore hypothesized that admission glucose, lactate levels and base deficit would be early, easily obtained and rapid predictors of mortality. Patients and methods: A comparative cross-sectional study of 282 polytraumatized patients who attended the Emergency Department (ED) of the Suez Canal University Hospital in the period from 1/1/2012 to 1/4/2013. Results: We found that the best cut-off value of the TRISS probability-of-survival score for prediction of mortality among polytraumatized patients is 90, with 77% sensitivity and 89% specificity, using the area under the ROC curve (0.89) at 95% CI. APACHE IV demonstrated 67% sensitivity and 95% specificity at 95% CI at a cut-off point of 99. The best cut-off value of random blood sugar (RBS) for prediction of mortality was >140 mg/dl, with 89% sensitivity and 49% specificity. The best cut-off value of base deficit for prediction of mortality was less than -5.6, with 64% sensitivity and 93% specificity.
The best cut-off point of lactate for prediction of mortality was >2.6 mmol/L, with 92% sensitivity and 42% specificity. Conclusion: According to our results for all evaluated predictors of mortality (laboratory parameters and scores), with cut-off values estimated using ROC curve analysis, the highest risk of mortality was found using a cut-off value of 90 for the TRISS score, while among the laboratory parameters the highest risk of mortality was associated with serum lactate >2.6 mmol/L. All three laboratory parameters were accurate in predicting mortality in polytraumatized patients and close to one another, with an area under the curve of 0.82 for serum lactate, 0.79 for base deficit and 0.77 for RBS.
Keywords: APACHE IV, emergency department, polytraumatized patients, serum lactate
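The cut-off selection behind these results can be sketched as a scan over candidate thresholds that maximizes Youden's J (sensitivity + specificity - 1), a standard way to pick an operating point on a ROC curve. The lactate values and outcomes below are illustrative, not the study's data.

```python
# Toy cohort: admission lactate (mmol/L) and outcome (1 = died, 0 = survived).
lactate = [1.1, 1.8, 2.2, 2.9, 3.4, 4.0, 1.5, 2.7, 3.8, 0.9]
died    = [0,   0,   0,   1,   1,   1,   0,   0,   1,   0]

def best_cutoff(values, outcomes):
    """Return the threshold maximizing Youden's J = sensitivity + specificity - 1."""
    best = (None, -1.0)
    for cut in sorted(set(values)):
        tp = sum(1 for v, d in zip(values, outcomes) if d and v > cut)
        fn = sum(1 for v, d in zip(values, outcomes) if d and v <= cut)
        tn = sum(1 for v, d in zip(values, outcomes) if not d and v <= cut)
        fp = sum(1 for v, d in zip(values, outcomes) if not d and v > cut)
        sens = tp / (tp + fn)   # fraction of deaths above the cut
        spec = tn / (tn + fp)   # fraction of survivors at or below it
        j = sens + spec - 1.0
        if j > best[1]:
            best = (cut, j)
    return best

cutoff, youden = best_cutoff(lactate, died)
```

With real cohort data the same scan would trade sensitivity against specificity less perfectly, and the area under the full ROC curve would summarize each predictor's overall discrimination.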
Procedia PDF Downloads 292