Search results for: chemical evaluation
721 Food Safety in Wine: Removal of Ochratoxin A in Contaminated White Wine Using Commercial Fining Agents
Authors: Antònio Inês, Davide Silva, Filipa Carvalho, Luís Filipe-Riberiro, Fernando M. Nunes, Luís Abrunhosa, Fernanda Cosme
Abstract:
The presence of mycotoxins in foodstuffs is a matter of concern for food safety. Mycotoxins are toxic secondary metabolites produced by certain molds, ochratoxin A (OTA) being one of the most relevant. Wines can also be contaminated with these toxicants, and several authors have demonstrated the presence of mycotoxins in wine, especially ochratoxin A. Its chemical structure is a dihydroisocoumarin linked at the 7-carboxy group to a molecule of L-β-phenylalanine via an amide bond. As these toxicants can never be completely removed from the food chain, many countries have defined maximum levels in food to address health concerns. OTA contamination of wines may pose a risk to consumer health, thus requiring treatments to achieve acceptable standards for human consumption. The maximum acceptable level of OTA in wines is 2.0 μg/kg according to Commission Regulation No. 1881/2006. Therefore, the aim of this work was to reduce OTA to safer levels using different fining agents and to assess their impact on white wine physicochemical characteristics. To evaluate their efficiency, 11 commercial fining agents (mineral, synthetic, animal and vegetable proteins) were tested as new approaches to OTA removal from white wine. Trials (including a control without addition of a fining agent) were performed in white wine artificially supplemented with OTA (10 µg/L). OTA analyses were performed after wine fining. Wine was centrifuged at 4000 rpm for 10 min and 1 mL of the supernatant was collected and mixed with an equal volume of acetonitrile/methanol/acetic acid (78:20:2 v/v/v). The solid fractions obtained after fining were also centrifuged (4000 rpm, 15 min), the resulting supernatant discarded, and the pellet extracted with 1 mL of the above solution and 1 mL of H2O. OTA analysis was performed by HPLC with fluorescence detection.
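Fining efficiency in trials of this kind is conventionally expressed as the percentage of OTA removed relative to the untreated control. A minimal sketch of that calculation (the concentrations below are illustrative values, not the study's measured data):

```python
def percent_removal(c_control, c_treated):
    """Percent of OTA removed by a fining agent, relative to the
    untreated control (both concentrations in ug/L)."""
    return 100.0 * (c_control - c_treated) / c_control

# Illustrative values only: control spiked at 10 ug/L, 2 ug/L remaining after fining.
print(percent_removal(10.0, 2.0))  # 80.0
```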
The most effective fining agent in removing OTA (80%) from white wine was a commercial formulation containing gelatin, bentonite and activated carbon. Removals between 10-30% were obtained with potassium caseinate, yeast cell walls and pea protein. With bentonites, carboxymethylcellulose, polyvinylpolypyrrolidone and chitosan, no considerable OTA removal was observed. Subsequently, the effectiveness of seven commercial activated carbons was evaluated and compared with the commercial formulation containing gelatin, bentonite and activated carbon. The different activated carbons were applied at the concentration recommended by the manufacturer in order to evaluate their efficiency in reducing OTA levels. The trial and OTA analysis were performed as described above. The results showed that in white wine all activated carbons except one removed 100% of the OTA, whereas the commercial formulation containing gelatin, bentonite and activated carbon reduced OTA concentration by only 73%. These results may provide useful information for winemakers, namely for the selection of the most appropriate oenological product for OTA removal, reducing wine toxicity and simultaneously enhancing food safety and wine quality.
Keywords: wine, OTA removal, food safety, fining
Procedia PDF Downloads 538
720 Statistical Optimization of Adsorption of a Harmful Dye from Aqueous Solution
Abstract:
Textile industries cater to varied customer preferences and contribute substantially to the economy. However, these industries also produce a considerable amount of effluents. Prominent among these are the azo dyes, which impart considerable color and toxicity even at low concentrations. Azo dyes are also used as coloring agents in the food and pharmaceutical industries. Despite their applications, azo dyes are notorious pollutants and carcinogens. Popular techniques like photo-degradation, biodegradation and the use of oxidizing agents are not applicable to all kinds of dyes, as most dyes are stable to these techniques. Chemical coagulation produces a large amount of toxic sludge, which is undesirable, and is also ineffective towards a number of dyes. Most azo dyes are stable to UV-visible light irradiation and may even resist aerobic degradation. Adsorption has been the most preferred technique owing to its low cost, high capacity, process efficiency and the possibility of regenerating and recycling the adsorbent. Adsorption is also preferred because it may produce a high-quality treated effluent and is able to remove different kinds of dyes. However, the adsorption process is influenced by many variables whose interdependence makes it difficult to identify optimum conditions. The variables include stirring speed, temperature, initial concentration and adsorbent dosage. Further, the internal diffusional resistance inside the adsorbent particle leads to slow uptake of the solute within the adsorbent. Hence, it is necessary to identify optimum conditions that lead to high capacity and uptake rate of these pollutants. In this work, commercially available activated carbon was chosen as the adsorbent owing to its high surface area. A typical azo dye found in textile effluent waters, viz. the monoazo Acid Orange 10 dye (CAS: 1936-15-8), was chosen as the representative pollutant.
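One systematic way to explore the interdependent variables listed above is to enumerate every combination of chosen factor levels, as a full factorial design does. A sketch with hypothetical two-level settings (the actual levels used in the study are not specified here):

```python
from itertools import product

# Hypothetical factor levels; the study's actual settings may differ.
factors = {
    "stirring_speed_rpm": [200, 400],
    "temperature_C": [25, 45],
    "dosage_g_per_L": [0.5, 1.0],
    "initial_conc_mg_per_L": [50, 100],
}

# Every run of a two-level full factorial over four factors: 2^4 = 16 experiments.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(runs))  # 16
```

Each `runs` entry is one experiment's settings; effects and interactions are then estimated from the responses measured at these 16 runs.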
Adsorption studies were mainly focused on obtaining equilibrium and kinetic data for the batch adsorption process at different process conditions. Studies were conducted at different stirring speeds, temperatures, adsorbent dosages and initial dye concentrations. The Full Factorial Design was the chosen statistical design framework for carrying out the experiments and identifying the important factors and their interactions. The optimum conditions identified from the experimental model were validated with actual experiments at the recommended settings. The equilibrium and kinetic data obtained were fitted to different models and the model parameters were estimated, giving more detail about the nature of the adsorption taking place. Critical data required to design batch adsorption systems for removal of Acid Orange 10 dye, and identification of the factors that critically influence the separation efficiency, are the key outcomes of this research.
Keywords: acid orange 10, activated carbon, optimum adsorption conditions, statistical design
Procedia PDF Downloads 169
719 Pregnancy Outcome in Women with HIV Infection from a Tertiary Care Centre of India
Authors: Kavita Khoiwal, Vatsla Dadhwal, K. Aparna Sharma, Dipika Deka, Plabani Sarkar
Abstract:
Introduction: About 2.4 million (1.93-3.04 million) people are living with HIV/AIDS in India. Of all HIV infections, 39% (930,000) are among women, and 5.4% of infections result from mother-to-child transmission (MTCT); 25,000 infected children are born every year. Besides the risk of mother-to-child transmission of HIV, these women are at higher risk of adverse pregnancy outcomes. The objectives of the study were to compare the obstetric and neonatal outcomes of HIV-positive women with those of low-risk HIV-negative women, and to assess the effect of antiretroviral drugs on preterm birth and IUGR. Materials and Methods: This is a retrospective case record analysis of 212 HIV-positive women delivering between 2002 and 2015 in a tertiary health care centre, compared with 238 HIV-negative controls. Women who underwent medical termination of pregnancy and abortion were excluded from the study. Obstetric outcomes analyzed were pregnancy-induced hypertension, intrauterine growth restriction, preterm birth, anemia, gestational diabetes and intrahepatic cholestasis of pregnancy. Neonatal outcomes analysed were birth weight, Apgar score, NICU admission and perinatal transmission. Out of 212 women, 204 received antiretroviral therapy (ART) to prevent MTCT: 27 women received single-dose nevirapine (sdNVP) or sdNVP tailed with 7 days of zidovudine and lamivudine (ZDV + 3TC), 15 received ZDV, 82 received duovir and 80 received triple drug therapy, depending upon the time of presentation. Results: The mean age of the 212 HIV-positive women was 25.72±3.6 years; 101 women (47.6%) were primigravida. HIV-positive status was diagnosed during pregnancy in 200 women, while 12 women were diagnosed prior to conception. Among the 212 HIV-positive women, 20 (9.4%) had preterm delivery (< 37 weeks), 194 (91.5%) delivered by cesarean section and 18 (8.5%) delivered vaginally.
178 neonates (83.9%) received exclusive top feeding and 34 neonates (16.03%) received exclusive breast feeding. When compared to low-risk HIV-negative women (n=238), HIV-positive women were more likely to deliver preterm (OR 1.27), have anemia (OR 1.39) and have intrauterine growth restriction (OR 2.07). The incidence of pregnancy-induced hypertension, diabetes mellitus and ICP was not increased. Mean birth weight was significantly lower in HIV-positive women (2593.60±499 g) than in HIV-negative women (2919±459 g). Complete follow-up is available for 148 neonates to date; the rest are under evaluation. Of these, 7 neonates were found to be HIV-positive. The risk of preterm birth (p = 0.039) and IUGR (p = 0.739) was higher in HIV-positive women who did not receive any ART during pregnancy than in women who received ART. Conclusion: HIV-positive pregnant women are at increased risk of adverse pregnancy outcomes. A multidisciplinary team approach and the use of highly active antiretroviral therapy can optimize maternal and perinatal outcomes.
Keywords: antiretroviral therapy, HIV infection, IUGR, preterm birth
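The odds ratios quoted in this abstract come from 2×2 outcome-by-exposure tables. A sketch of the standard calculation; the counts below are illustrative (chosen to reproduce an OR of 1.27), not the study's published control counts:

```python
def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """OR = (a/b) / (c/d) for a 2x2 outcome-by-exposure table."""
    return (exposed_cases / exposed_noncases) / (unexposed_cases / unexposed_noncases)

# Illustrative counts only: 20/192 preterm among exposed vs 18/220 among controls.
print(round(odds_ratio(20, 192, 18, 220), 2))  # 1.27
```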
Procedia PDF Downloads 260
718 Low-Temperature Poly-Si Nanowire Junctionless Thin Film Transistors with Nickel Silicide
Authors: Yu-Hsien Lin, Yu-Ru Lin, Yung-Chun Wu
Abstract:
This work demonstrates ultra-thin poly-Si (polycrystalline silicon) nanowire junctionless thin film transistors (NW JL-TFTs) with nickel silicide contacts. For the nickel silicide film, a two-step annealing process was designed to form an ultra-thin, uniform, low-sheet-resistance (Rs) Ni silicide film. The NW JL-TFT with nickel silicide contacts exhibits good electrical properties, including a high on/off current ratio (>10⁷), a subthreshold slope of 186 mV/dec., and low parasitic resistance. In addition, this work compares the electrical characteristics of NW JL-TFTs with nickel silicide and non-silicide contacts. Nickel silicide techniques are widely used for high-performance devices as devices scale down, owing to the source/drain sheet resistance issue. The self-aligned silicide (salicide) technique is therefore used to reduce the series resistance of the device. Nickel silicide has several advantages, including a low-temperature process, low silicon consumption, no bridging failure, smaller mechanical stress, and smaller contact resistance. The junctionless thin-film transistor (JL-TFT) is fabricated simply by heavily doping the channel and source/drain (S/D) regions simultaneously. Owing to this special doping profile, the JL-TFT has advantages such as a lower thermal budget, which makes it easier to integrate with high-k/metal-gate processes than conventional MOSFETs (Metal Oxide Semiconductor Field-Effect Transistors), a longer effective channel length than conventional MOSFETs, and avoidance of complicated source/drain engineering. To solve the JL-TFT turn-off problem, an ultra-thin body (UTB) structure is needed to fully deplete the channel region in the off-state. On the other hand, the drive current (Iᴅ) declines as transistor features are scaled. Therefore, this work demonstrates ultra-thin poly-Si nanowire junctionless thin film transistors with nickel silicide contacts.
This work investigates the low-temperature formation of a nickel silicide layer by physical vapor deposition (PVD) of a 15 nm Ni layer on the poly-Si substrate. Notably, a two-step annealing process was designed to form an ultra-thin, uniform, low-sheet-resistance (Rs) Ni silicide film. The first annealing step promoted Ni diffusion through a thin interfacial amorphous layer, after which the unreacted metal was lifted off. The second annealing step lowered the sheet resistance and firmly merged the silicide phase. The ultra-thin poly-Si nanowire junctionless thin film transistor (NW JL-TFT) with nickel silicide contacts is demonstrated, revealing a high on/off current ratio (>10⁷), a subthreshold slope of 186 mV/dec., and low parasitic resistance. In short, the NW JL-TFT with nickel silicide contacts exhibits competitive short-channel behavior and improved drive current.
Keywords: poly-Si, nanowire, junctionless, thin-film transistors, nickel silicide
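The sheet resistance (Rs) of a thin silicide film like this is commonly measured with a four-point probe; the abstract does not state the measurement method, so the conversion below is an assumed, standard sketch. The geometric factor π/ln 2 assumes a film much thinner and wider than the probe spacing:

```python
import math

def sheet_resistance(voltage_V, current_A):
    """Four-point-probe sheet resistance in ohms/square:
    Rs = (pi / ln 2) * V / I for a thin, laterally large film."""
    return (math.pi / math.log(2)) * voltage_V / current_A

# Illustrative reading: 1 mV measured at 1 mA forced current.
print(round(sheet_resistance(1e-3, 1e-3), 3))  # 4.532
```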
Procedia PDF Downloads 237
717 Phytochemical Screening and in vitro Antibacterial and Antioxidant Potential of Microalgal Strain, Cymbella
Authors: S. Beekrum, B. Odhav, R. Lalloo, E. O. Amonsou
Abstract:
Marine microalgae are rich sources of novel, biologically active metabolites; therefore, they may be used in the food industry as natural food ingredients and functional foods. Among other applications, they have several biological uses related to health benefits. In past decades, food scientists have searched for natural alternatives to replace synthetic antioxidants, whose use has decreased due to their suspected activity as promoters of carcinogenesis, as well as consumer rejection of synthetic food additives. The aim of the study was to screen phytochemicals from Cymbella biomass extracts and to examine their in vitro antioxidant and antimicrobial potential. Cymbella biomass was obtained from the CSIR (South Africa), and four different solvents, namely methanol, acetone, n-hexane and water, were used for extraction. To take different antioxidant mechanisms into account, seven antioxidant assays were carried out: free radical scavenging (DPPH assay), Trolox equivalent antioxidant capacity (TEAC assay), radical cation scavenging (ABTS assay), superoxide anion radical scavenging, reducing power, determination of total phenolic compounds and determination of total flavonoid content. The total phenol and flavonoid contents of the extracts were determined as gallic acid equivalents and rutin equivalents, respectively. The in vitro antimicrobial effect of the extracts was tested against several pathogens (Staphylococcus aureus, Listeria monocytogenes, Bacillus subtilis, Salmonella enteritidis, Escherichia coli, Pseudomonas aeruginosa and Candida albicans) using the disc diffusion assay. Qualitative phytochemical analyses were conducted by chemical tests to screen for the presence of tannins, flavonoids, terpenoids, phenols, steroids, saponins, glycosides and alkaloids. The present investigation revealed that all extracts showed relatively strong antibacterial activity against most of the tested bacteria.
The methanolic extract of the biomass contained a significantly high phenolic content of 111.46 mg GAE/g, and the hexane extract contained 65.279 mg GAE/g. Results of the DPPH assay showed strong antioxidant capacity: 79% radical scavenging in the methanolic extract and 85% in the hexane extract. The extracts displayed effective reducing power and superoxide anion radical scavenging. These results highlight potential antioxidant activity in the methanol and hexane extracts. Phytochemical screening showed the presence of terpenoids, flavonoids, phenols and saponins. The use of Cymbella as a natural antioxidant source and a potential source of antibacterial compounds and phytochemicals in the food industry appears promising and should be investigated further.
Keywords: antioxidants, antimicrobial, Cymbella, microalgae, phytochemicals
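DPPH scavenging percentages like the 79% and 85% above are conventionally computed from absorbance readings of the radical solution with and without the extract. A sketch of that calculation (the absorbances are illustrative, chosen to reproduce a 79% figure):

```python
def dpph_inhibition(abs_control, abs_sample):
    """Percent DPPH radical scavenging from 517 nm absorbances:
    100 * (A_control - A_sample) / A_control."""
    return 100.0 * (abs_control - abs_sample) / abs_control

# Illustrative absorbances only, not the study's readings.
print(round(dpph_inhibition(1.00, 0.21), 1))  # 79.0
```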
Procedia PDF Downloads 455
716 Inherent Difficulties in Countering Islamophobia
Authors: Imbesat Daudi
Abstract:
Islamophobia, which is a billion-dollar industry, is widespread, especially in the United States, Europe, India, Israel, and countries with Muslim minorities at odds with their governments' policies. Hatred of Islam in the West did not evolve spontaneously; it was methodically created. Islamophobia's current format has been designed to spread on its own, find a space in the Western psyche, and resist eradication. The hatred has been sustained by neoconservative ideologues and their allies, supported by the mainstream media. Social scientists have evaluated how ideas spread, why an idea can go viral, and where new ideas find space in our brains; this was made possible by advances in the computational power of software and computers. The spread of ideas, including Islamophobia, follows an S-shaped curve with three phases: an initial exploratory phase with a long lag period, an explosive phase if the ideas go viral, and a final phase in which the ideas find space in the human psyche. In the initial phase, ideas are quickly examined in a center in the prefrontal lobe. When an idea is deemed relevant, it is sent for evaluation to another center of the prefrontal lobe, where it is critically examined. Once it takes final shape, the idea is sent as a final product to a center in the occipital lobe. This center cannot critically evaluate ideas; it can only defend them from critics. Counterarguments, no matter how scientific, are automatically rejected. Therefore, arguments that could be highly effective in the early phases are counterproductive once ideas are stored in the occipital lobe. Anti-Islamophobic intellectuals have done a very good job of countering Islamophobic arguments. However, they have not been as effective as the neoconservative ideologues who have promoted anti-Muslim rhetoric based on half-truths, misinformation, or outright lies.
The failure is partly due to the support pro-war activists receive from the mainstream media, state institutions, mega-corporations engaged in violent conflicts, and think tanks that supply Islamophobic arguments. However, there are also scientific reasons why anti-Islamophobic thinkers have been less effective. The dynamics of spreading ideas change once they are stored in the occipital lobe: the human brain is incapable of evaluating ideas further once it accepts them as its own; therefore, a different strategy is required to be effective. This paper examines 1) why anti-Islamophobic intellectuals have failed to change the minds of non-Muslims and 2) the steps for countering hatred. Simply put, a new strategy is needed that can effectively counteract hatred of Islam and Muslims. Islamophobia is a disease that requires strong measures. Fighting hatred is always a challenge, but if we understand why Islamophobia is taking root in the twenty-first century, we can succeed in challenging Islamophobic arguments. That will require a coordinated effort of intellectuals, writers and the media.
Keywords: islamophobia, Islam and violence, anti-islamophobia, demonization of Islam
Procedia PDF Downloads 48
715 Bio-Medical Equipment Technicians: Crucial Workforce to Improve Quality of Health Services in Rural Remote Hospitals in Nepal
Authors: C. M. Sapkota, B. P. Sapkota
Abstract:
Background: Continuous developments in science and technology are increasing the availability of thousands of medical devices, all of which should be of good quality and used appropriately to address global health challenges. Biomedical devices are becoming ever more indispensable in health service delivery, and among the key workforce responsible for their design, development, regulation, evaluation and training in their use, the biomedical equipment technician (BMET) is crucial. As a pivotal member of the health workforce, biomedical technicians are an essential component of quality health service delivery, supporting the attainment of the Sustainable Development Goals. Methods: The study was based on a cross-sectional descriptive design. Indicators measuring the quality of health services were assessed in Mechi Zonal Hospital (MZH) and Sagarmatha Zonal Hospital (SZH). Indicators were calculated from the data on hospital utilization and performance for 2018 available in the medical record sections of both hospitals. MZH employed a BMET during 2018, but SZH had no BMET in 2018. A focus group discussion with health workers in both hospitals was conducted to validate the hospital records, and a client exit interview was conducted to assess the level of client satisfaction in both hospitals. Results: In MZH there was round-the-clock availability and utilization of radio-diagnostic and laboratory equipment, and the operation theater was functional throughout the year. The bed occupancy rate in MZH was 97%, but in SZH it was only 63%. In SZH, the OT was functional on only 54% of days in 2018; the CT scan machine was newly installed but not functional, and the computerized X-ray was functional on only 72% of days. The level of client satisfaction was 87% in MZH but just 43% in SZH. MZH performed all 256 Caesarean sections, but SZH performed only 36% of 210 Caesarean sections in 2018.
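Bed occupancy rate, one of the utilization indicators compared above, is computed from inpatient-days against available bed-days over the period. A sketch of the standard formula (the bed count and inpatient-days below are hypothetical, chosen to reproduce a 97% figure):

```python
def bed_occupancy_rate(inpatient_days, beds, days_in_period=365):
    """Percent of available bed-days actually occupied during the period."""
    return 100.0 * inpatient_days / (beds * days_in_period)

# Hypothetical hospital: 100 beds, 35,405 inpatient-days in a year.
print(round(bed_occupancy_rate(35405, 100)))  # 97
```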
In the annual performance ranking of government hospitals, MZH was placed 1st while SZH was placed 19th out of 32 referral hospitals nationwide in 2018. Conclusion: Biomedical technicians are crucial members of the human-resources-for-health team with a pivotal role. Trained and qualified BMET professionals are required within health-care systems in order to design, evaluate, regulate, acquire, maintain, manage and train on safe medical technologies. They apply knowledge of engineering and technology to health-care systems to ensure the availability, affordability, accessibility, acceptability and utilization of safer, higher quality, effective, appropriate and socially acceptable biomedical technology for preventive, promotive, curative, rehabilitative and palliative care across all levels of health service delivery.
Keywords: biomedical equipment technicians, BMET, human resources for health, HRH, quality health service, rural hospitals
Procedia PDF Downloads 126
714 A Public Health Perspective on Deradicalisation: Re-Conceptualising Deradicalisation Approaches
Authors: Erin Lawlor
Abstract:
In 2008, Time magazine named terrorist rehabilitation one of the best ideas of the year. The term deradicalisation has since become synonymous with rehabilitation within security discourse. The allure of a "quick fix" when managing terrorist populations (particularly within prisons) has led to a focus on prescriptive programmes, with a distinct lack of exploration into the drivers for a person to disengage or deradicalise from violence. It has been argued that, in tackling a snowballing issue, interventions have moved too quickly for both theory development and methodological structure. This overly quick acceptance of a term that lacks rigorous testing, measurement, and monitoring means there is a distinct lack of evidence that deradicalisation is a genuine process/phenomenon, leading academics to retrospectively design frameworks and interventions around a concept that is not truly understood. The UK Home Office has openly acknowledged the lack of empirical data on this subject. This lack of evidence has a direct impact on policy and intervention development. Extremism and deradicalisation are issues that affect public health outcomes on a global scale, to the point that terrorism has now been added to the list of causes of trauma, both in the direct form of being a victim of an attack and in the indirect context of witnesses, children and ordinary citizens who live in daily fear. This study critiques current deradicalisation discourses to establish whether public health approaches offer opportunities for development. The research begins by exploring the theoretical constructs of both deradicalisation and public health issues, asking: What does deradicalisation involve? Is there an evidential base on which deradicalisation theory has established itself? What theory are public health interventions devised from? What does success look like in both fields?
From this base, current deradicalisation practices are then explored through examples of work already being carried out. The critiques fall into three discussion points: language, the difficulties of conducting empirical studies, and the issues around outcome measurement that deradicalisation interventions face. This study argues that a public health approach to deradicalisation offers the opportunity to bring clarity to the definitions of radicalisation, identify what could be modified through intervention, and offer insights into the evaluation of interventions. As opposed to simply focusing on one element of deradicalisation and analysing it in isolation, a public health approach allows for what the literature has pointed out is missing: a comprehensive analysis of current interventions and information on creating efficacy monitoring systems. Interventions, policies, guidance, and practices in both the UK and Australia will be compared and contrasted, owing to the joint nature of this research between Sheffield Hallam University and La Trobe University, Melbourne.
Keywords: radicalisation, deradicalisation, violent extremism, public health
Procedia PDF Downloads 66
713 Classification of Foliar Nitrogen in Common Bean (Phaseolus Vulgaris L.) Using Deep Learning Models and Images
Authors: Marcos Silva Tavares, Jamile Raquel Regazzo, Edson José de Souza Sardinha, Murilo Mesquita Baesso
Abstract:
Common beans are a widely cultivated and consumed legume globally, serving as a staple food for humans, especially in developing countries, due to their nutritional characteristics. Nitrogen (N) is the nutrient that most limits productivity, and foliar analysis is crucial to ensure balanced nitrogen fertilization. Excessive N applications can cause, in isolation or cumulatively, soil and water contamination and plant toxicity, and can increase plants' susceptibility to diseases and pests. However, the quantification of N using conventional methods is time-consuming and costly, demanding new technologies to optimize the adequate supply of N to plants. It therefore becomes necessary to establish constant monitoring of the foliar content of this macronutrient, mainly at the V4 stage, aiming at precision management of nitrogen fertilization. In this work, the objective was to evaluate the performance of a deep learning model, ResNet-50, in the classification of foliar nitrogen in common beans using RGB images. The BRS Estilo cultivar was sown in a greenhouse in a completely randomized design with four nitrogen doses (T1 = 0 kg N ha-1, T2 = 25 kg N ha-1, T3 = 75 kg N ha-1, and T4 = 100 kg N ha-1) and 12 replications. Pots with 5 L capacity were used with a substrate composed of 43% soil (Neossolo Quartzarênico), 28.5% crushed sugarcane bagasse, and 28.5% cured bovine manure. Plants were watered with 5 mm of water per day. The application of urea (45% N) and the acquisition of images occurred 14 and 32 days after sowing, respectively. Code developed in Matlab© R2022b was used to cut the original images into smaller blocks, producing an image bank of four folders, labeled T1, T2, T3, and T4 to represent the four classes, each containing 500 images of 224x224 pixels obtained from plants cultivated under the different N doses. Matlab© R2022b was also used for the implementation and performance analysis of the model.
Performance was evaluated with a set of metrics including accuracy (AC), F1-score (F1), specificity (SP), area under the curve (AUC), and precision (P). ResNet-50 showed high performance in the classification of foliar N levels in common beans, with an accuracy of 85.6%. The F1-scores for classes T1, T2, T3, and T4 were 76, 72, 74, and 77%, respectively. This study revealed that the use of RGB images combined with deep learning can be a promising alternative to slow laboratory analyses, capable of optimizing the estimation of foliar N. This can allow rapid intervention by the producer to achieve higher productivity and less fertilizer waste. Future work is encouraged to develop mobile devices capable of handling images with deep learning for in situ classification of plant nutritional status.
Keywords: convolutional neural network, residual network 50, nutritional status, artificial intelligence
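All of the reported metrics (accuracy, precision, specificity, F1) derive from per-class confusion-matrix counts in a one-vs-rest fashion. A minimal sketch with hypothetical counts (not the study's confusion matrix), chosen to land near the reported F1 range:

```python
def classification_metrics(tp, fp, fn, tn):
    """One-vs-rest metrics from confusion-matrix counts for a single class."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, specificity, f1

# Hypothetical counts for one class out of a 500-image test set.
acc, prec, spec, f1 = classification_metrics(tp=90, fp=20, fn=35, tn=355)
print(round(f1, 2))  # 0.77
```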
Procedia PDF Downloads 19
712 MBES-CARIS Data Validation for the Bathymetric Mapping of Shallow Water in the Kingdom of Bahrain on the Arabian Gulf
Authors: Abderrazak Bannari, Ghadeer Kadhem
Abstract:
The objectives of this paper are the validation and evaluation of MBES-CARIS BASE surface data performance for bathymetric mapping of shallow water in the Kingdom of Bahrain. The latter is an archipelago with a total land area of about 765.30 km², approximately 126 km of coastline and 8,000 km² of marine area, located in the Arabian Gulf, east of Saudi Arabia and west of Qatar (26° 00’ N, 50° 33’ E). To achieve these objectives, bathymetric attributed grid files (X, Y, and depth) generated from the coverage of ship-track MBES data with 300 x 300 m cells, processed with CARIS-HIPS, were downloaded from the General Bathymetric Chart of the Oceans (GEBCO), then brought into ArcGIS and converted into raster format in five steps: (1) exporting the GEBCO BASE surface data to an ASCII file; (2) converting the ASCII file to a point shapefile; (3) extracting the points covering the water boundary of the Kingdom of Bahrain; (4) multiplying the depth values by -1 to obtain negative values; and (5) applying simple kriging in the ArcMap environment to generate a new raster bathymetric grid surface of 30×30 m cells, which was the basis of the subsequent analysis. Finally, for validation purposes, 2,200 bathymetric points were extracted from a medium-scale nautical map (1:100,000) covering different depths over the Bahrain national water boundary. The nautical map was scanned, georeferenced and overlaid on the raster bathymetric grid surface generated in step 5, and homologous depth points were selected. Statistical analysis, expressed as a linear error at the 95% confidence level, showed a strong correlation coefficient (R² = 0.96) and a low RMSE (± 0.57 m) between the nautical map and derived MBES-CARIS depths when considering only the shallow areas with depths of less than 10 m (about 800 validation points).
When considering only the deeper areas (> 10 m), the correlation coefficient is 0.73 and the RMSE is ± 2.43 m, while for the totality of the 2,200 validation points across all depths, the correlation coefficient is still significant (R² = 0.81) with a satisfactory RMSE (± 1.57 m). This variation is likely caused by the MBES not completely covering the bottom in several of the deeper pockmarks because of the rapid change in depth. In addition, steep slopes and the rough seafloor probably affect the acquired MBES raw data, and the interpolation of values for areas missed between MBES acquisition swath lines (ship-track sounding data) may not reflect the true depths of those areas. However, overall, the MBES-CARIS data are very appropriate for bathymetric mapping of shallow water areas.
Keywords: bathymetry mapping, multibeam echosounder systems, CARIS-HIPS, shallow water
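The agreement statistics quoted above (R², RMSE) can be reproduced for any set of homologous depth pairs; a pure-Python sketch of both formulas (the depth pairs below are illustrative, not the study's validation points):

```python
import math

def r_squared_and_rmse(observed, predicted):
    """Coefficient of determination and root-mean-square error
    between paired depth values (same units, e.g. metres)."""
    n = len(observed)
    mean_obs = sum(observed) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - ss_res / ss_tot, math.sqrt(ss_res / n)

# Illustrative chart vs MBES-derived depths (m).
chart = [2.0, 4.0, 6.0, 8.0, 10.0]
mbes = [2.1, 3.8, 6.3, 7.9, 10.2]
r2, rmse = r_squared_and_rmse(chart, mbes)
print(round(r2, 3), round(rmse, 3))  # 0.995 0.195
```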
Procedia PDF Downloads 381
711 Repair of Thermoplastic Composites for Structural Applications
Authors: Philippe Castaing, Thomas Jollivet
Abstract:
As a result of their advantages, i.e. recyclability, weldability, and environmental compatibility, long (continuous) fiber thermoplastic composites (LFTPC) are increasingly used in many industrial sectors (mainly automotive and aeronautic) for structural applications. Indeed, in the next ten years, environmental rules will put pressure on the use of new structural materials like composites. In aerospace, more than 50% of the damage is due to stress impact, and 85% of the damage repaired is on the fuselage (fuselage skin panels and around doors). With the arrival of airplanes made mainly of composite materials, replacement of sections or panels seems economically difficult, and repair becomes essential. The objective of the present study is to propose a repair solution that avoids replacing the damaged part in thermoplastic composites, in order to recover the initial mechanical properties. The classification of impact damage is not so easy: talking about low-energy impact (less than 35 J) can be totally wrong when high speeds or small thicknesses, as well as thermoplastic resins, are considered. Crash and perforation at higher energies create important damage, and the structures are replaced without repair, so we consider here only damage due to impacts at low energy, which for laminates is as follows: − transverse cracking; − delamination; − fiber rupture. At low energy, the damage is barely visible but can nevertheless significantly reduce the mechanical strength of the part due to resin cracks, while little fiber rupture is observed. The patch repair solution remains the standard one but may lead to the rupture of fibers and consequently create more damage. That is the reason why we investigate the repair of thermoplastic composites impacted at low energy. Indeed, thermoplastic resins are interesting as they absorb impact energy through plastic strain.
The methodology is as follows: impact tests at low energy on thermoplastic composites; identification of the damage by micrographic observations; evaluation of the harmfulness of the damage; repair by reconsolidation according to the extent of the damage; and validation of the repair by mechanical characterization (compression). In this study, the impact tests are performed at various levels of energy on thermoplastic composites (PA/C, PEEK/C and PPS/C, woven 50/50 and unidirectional) to determine the level of impact energy creating damage in the resin without fiber rupture. We identify the extent of the damage by ultrasonic (US) inspection and micrographic observations through the part thickness. The samples were in addition characterized in compression to evaluate the loss of mechanical properties. The repair strategy then consists in reconsolidating the damaged parts by thermoforming; after reconsolidation, the laminates are characterized in compression for validation. To conclude, the study demonstrates the feasibility of the repair for low-energy impacts on thermoplastic composites, as the samples recover their properties. As a first step of the study, the “repair” is made by reconsolidation on a thermoforming press, but we could imagine an in situ process to reconsolidate the damaged parts.
Keywords: aerospace, automotive, composites, compression, damages, repair, structural applications, thermoplastic
Procedia PDF Downloads 304
710 Photocatalytic Properties of Pt/Er-KTaO3
Authors: Anna Krukowska, Tomasz Klimczuk, Adriana Zaleska-Medynska
Abstract:
Photoactive materials have attracted attention due to their potential application in the degradation of environmental pollutants to non-hazardous compounds in an eco-friendly route. Among semiconductor photocatalysts, tantalates such as potassium tantalate (KTaO3) are excellent functional photomaterials. However, tantalate-based materials are less active under visible-light irradiation; their photoactivity could be enhanced by modifying the opto-electronic properties of KTaO3 through doping with a rare earth metal (Er) and further photodeposition of noble metal nanoparticles (Pt). Inclusion of a rare earth element in the orthorhombic structure of the tantalate can generate one high-energy photon by absorbing two or more incident low-energy photons, converting visible and infrared light into ultraviolet light to satisfy the requirement of KTaO3 photocatalysts. On the other hand, noble metal nanoparticles deposited on the surface of the semiconductor strongly absorb visible light due to their surface plasmon resonance, in which their conduction electrons undergo a collective oscillation induced by the electric field of visible light. Furthermore, the high dispersion of the Pt nanoparticles obtained by the photodeposition process is an additional important factor improving the photocatalytic activity. The present work aims to study the photocatalytic performance of the prepared Er-doped KTaO3 and of the same material after incorporation of Pt nanoparticles by photodeposition. Moreover, the research also examines correlations between the photocatalytic activity and the physico-chemical properties of the obtained Pt/Er-KTaO3 samples. The Er-doped KTaO3 microcomposites were synthesized by a hydrothermal method. The photodeposition method was then used for Pt loading over Er-KTaO3.
The structural and optical properties of the Pt/Er-KTaO3 photocatalysts were characterized using scanning electron microscopy (SEM), X-ray diffraction (XRD), the volumetric adsorption method (BET), UV-Vis absorption measurements, Raman spectroscopy and luminescence spectroscopy. The photocatalytic properties of the Pt/Er-KTaO3 microcomposites were investigated by degradation of phenol in the aqueous phase as a model pollutant under visible- and ultraviolet-light irradiation. The results of this work show that all the prepared photocatalysts exhibit a low BET surface area, although doping the bare KTaO3 with the rare earth element (Er) slightly increases this value. The crystalline structures of the Pt/Er-KTaO3 powders exhibited nearly identical positions for the main peak at about 22.8°, and the XRD pattern could be assigned to an orthorhombically distorted perovskite structure. The Raman spectra of the obtained semiconductors confirmed the perovskite-like structure. The optical absorption spectra of the Pt nanoparticles exhibited a plasmon absorption band with main peaks at about 216 and 264 nm. The addition of Pt nanoparticles increased the photoactivity compared to Er-KTaO3 and pure KTaO3. In summary, the optical properties of KTaO3 change with Er doping and further photodeposition of Pt nanoparticles.
Keywords: heterogeneous photocatalysis, KTaO3 photocatalysts, Er3+ ion doping, Pt photodeposition
Procedia PDF Downloads 360
709 Simulating an Interprofessional Hospital Day Shift: A Student Interprofessional (IP) Collaborative Learning Activity
Authors: Fiona Jensen, Barb Goodwin, Nancy Kleiman, Rhonda Usunier
Abstract:
Background: Clinical simulation is now a common component in many health profession curricula in preparation for clinical practice. In the Rady Faculty of Health Sciences (RFHS), college leads in simulation and interprofessional (IP) education planned an eight-hour simulated hospital day shift, where seventy students from six health professions across two campuses learned with each other in a safe, realistic environment. Learning about interprofessional collaboration, an expected competency for many health professions upon graduation, was a primary focus of the simulation event. Method: Faculty representatives from the Colleges of Nursing, Medicine, Pharmacy and Rehabilitation Sciences (Physical Therapy, Occupational Therapy, Respiratory Therapy) worked together to plan the IP event in a simulation facility in the College of Nursing. Each college provided a faculty mentor to guide the students of its own profession. Students were placed in interprofessional teams consisting of a nurse, a physician and a pharmacist, with respiratory, occupational and physical therapists shared across the teams depending on the needs of the patients. Eight patient scenarios were role-played by health profession students, who had been provided with their patient’s story shortly before the event. Each team was guided by a facilitator. Results and Outcomes: On the morning of the event, all students gathered in a large group to meet the mentors and facilitators and receive a brief overview of the six competencies for effective collaboration and the session objectives. The students, in their own profession's roles, were provided with their patient’s chart at the beginning of the shift, met with their team, and then completed profession-specific assessments. Shortly into the shift, IP team rounds began, facilitated by the team facilitator.
During the shift, each patient role-played a spontaneous health incident, which required collaboration between the IP team members for assessment and management. The afternoon concluded with team rounds, a collaborative management plan, and a facilitated debrief. Conclusions: During the debrief sessions, students responded to set questions related to the session learning objectives and described many positive learning moments. We believe that we have a sustainable simulation-based IP collaborative learning opportunity, which can be embedded into curricula and has the capacity to grow to include more health profession faculties and students. Opportunities are being explored in the RFHS at the administrative level to offer this event more frequently in the academic year to reach more students. In addition, a formally structured event evaluation tool would provide important feedback to event organizers and the colleges about the significance of the simulation event to student learning.
Keywords: simulation, collaboration, teams, interprofessional
Procedia PDF Downloads 130
708 Precise Determination of the Residual Stress Gradient in Composite Laminates Using a Configurable Numerical-Experimental Coupling Based on the Incremental Hole Drilling Method
Authors: A. S. Ibrahim Mamane, S. Giljean, M.-J. Pac, G. L’Hostis
Abstract:
Fiber reinforced composite laminates are particularly subject to residual stresses due to their heterogeneity and the complex chemical, mechanical and thermal mechanisms that occur during their processing. Residual stresses are now well known to cause damage accumulation, shape instability, and behavior disturbance in composite parts. Many works exist in the literature on techniques for minimizing residual stresses, mainly in thermosetting and thermoplastic composites. To study in depth the influence of processing mechanisms on the formation of residual stresses, and to minimize them by establishing a reliable correlation, it is essential to be able to measure very precisely the profile of residual stresses in the composite. Residual stresses are important data to consider when sizing composite parts and predicting their behavior. Incremental hole drilling is very effective for measuring the gradient of residual stresses in composite laminates. This method is semi-destructive and consists of incrementally drilling a hole through the thickness of the material and measuring the relaxation strains around the hole for each increment using three strain gauges. These strains are then converted into residual stresses using a matrix of coefficients. These coefficients, called calibration coefficients, depend on the diameter of the hole and the dimensions of the gauges used. The reliability of incremental hole drilling depends on the accuracy with which the calibration coefficients are determined. These coefficients are calculated using a finite element model. The samples’ features and the experimental conditions must be considered in the simulation. Any mismatch can lead to inadequate calibration coefficients, thus introducing errors in the residual stresses. Several calibration coefficient correction methods exist for isotropic materials, but there is a lack of information on this subject concerning composite laminates.
In this work, a Python program was developed to automatically generate the adequate finite element model. This model allowed us to perform a parametric study to assess the influence of experimental errors on the calibration coefficients. The results highlighted the sensitivity of the calibration coefficients to the considered errors and gave an order of magnitude of the precision required of the experimental device to obtain reliable measurements. On the basis of these results, improvements were proposed for the experimental device. Furthermore, a numerical method was proposed to correct the calibration coefficients for different types of materials, including thick composite parts for which the analytical approach is too complex. This method consists of taking the experimental errors into account in the simulation. Accurate measurement of the experimental errors (such as eccentricity of the hole, angular deviation of the gauges from their theoretical position, or errors in increment depth) is therefore necessary. The aim is to determine the residual stresses more precisely and to expand the validity domain of the incremental hole drilling technique.
Keywords: fiber reinforced composites, finite element simulation, incremental hole drilling method, numerical correction of the calibration coefficients, residual stresses
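The strain-to-stress conversion at the heart of the method above can be illustrated with a minimal sketch. The calibration matrix here is entirely hypothetical; real coefficients come from the finite element model and depend on hole diameter, gauge geometry, and increment depth:

```python
import numpy as np

# Hypothetical calibration coefficients (units: 1/GPa, invented for the sketch)
# relating residual stresses to the relaxation strains read by three gauges.
C = np.array([[-0.35, -0.10,  0.00],
              [-0.10, -0.35,  0.00],
              [ 0.00,  0.00, -0.45]])

# Illustrative relaxation strains measured for one drilling increment
eps = np.array([-120e-6, -80e-6, -30e-6])

# Residual stresses (GPa) for this increment: solve C @ sigma = eps
sigma = np.linalg.solve(C, eps)
```

An error in C (e.g. from a mismatch between the simulated and actual hole eccentricity) propagates directly into sigma, which is why the paper's parametric study of the calibration coefficients matters.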
Procedia PDF Downloads 132
707 The Power of in situ Characterization Techniques in Heterogeneous Catalysis: A Case Study of Deacon Reaction
Authors: Ramzi Farra, Detre Teschner, Marc Willinger, Robert Schlögl
Abstract:
Introduction: The conventional approach of characterizing solid catalysts under static conditions, i.e., before and after reaction, does not provide sufficient knowledge of the physicochemical processes occurring under dynamic conditions at the molecular level. Hence, developing new in situ characterization techniques that can be used under real catalytic reaction conditions is highly desirable. In situ Prompt Gamma Activation Analysis (PGAA) is a rapidly developing chemical analytical technique that enables us to experimentally assess the coverage of surface species under catalytic turnover and correlate these with the reactivity. The catalytic HCl oxidation (Deacon reaction) over bulk ceria will serve as our example. Furthermore, in situ Transmission Electron Microscopy is a powerful technique that can contribute to the study of atmosphere- and temperature-induced morphological or compositional changes of a catalyst at atomic resolution. The application of such techniques (PGAA and TEM) will pave the way to a greater and deeper understanding of the dynamic nature of active catalysts. Experimental/Methodology: In situ Prompt Gamma Activation Analysis (PGAA) experiments were carried out to determine the Cl uptake and the degree of surface chlorination under reaction conditions by varying p(O2), p(HCl), p(Cl2), and the reaction temperature. The abundance and dynamic evolution of OH groups on the working catalyst under various steady-state conditions were studied by means of in situ FTIR with a specially designed, homemade transmission cell. For real in situ TEM, we use a commercial in situ holder with a home-built gas feeding system and gas analytics. Conclusions: Two complementary in situ techniques, namely in situ PGAA and in situ FTIR, were utilized to investigate the surface coverage of the two most abundant species (Cl and OH).
The OH density and Cl uptake were followed under multiple steady-state conditions as a function of p(O2), p(HCl), p(Cl2), and temperature. These experiments have shown that the OH density correlates positively with the reactivity, whereas Cl correlates negatively. The p(HCl) experiments give rise to increased activity accompanied by an increase in Cl coverage (the opposite trend to p(O2) and T). Cl2 strongly inhibits the reaction, but no measurable increase in the Cl uptake was found. After considering all previous observations, we conclude that only a minority of the available adsorption sites contribute to the reactivity. In addition, a mechanism for the catalysed reaction was proposed. The chlorine-oxygen competition for the available active sites renders re-oxidation the rate-determining step of the catalysed reaction. Further investigations using in situ TEM are planned and will be conducted in the near future. Such experiments allow us to monitor active catalysts at the atomic scale under the most realistic conditions of temperature and pressure. The talk will shed light on the potential and limitations of in situ PGAA and in situ TEM in the study of catalyst dynamics.
Keywords: CeO2, Deacon process, in situ PGAA, in situ TEM, in situ FTIR
Procedia PDF Downloads 291
706 Determinants of Quality of Life in Patients with Atypical Parkinsonian Syndromes: 1-Year Follow-Up Study
Authors: Tatjana Pekmezovic, Milica Jecmenica-Lukic, Igor Petrovic, Vladimir Kostic
Abstract:
Background: The group of atypical parkinsonian syndromes (APS) includes a variety of rare neurodegenerative disorders characterized by reduced life expectancy, increasing disability, and a considerable impact on health-related quality of life (HRQoL). Aim: In this study we wanted to answer two questions: a) which demographic and clinical factors are the main contributors to HRQoL in our cohort of patients with APS, and b) how does the quality of life of these patients change over a 1-year follow-up period. Patients and Methods: We conducted a prospective cohort study in a hospital setting. The initial study comprised all consecutive patients who were referred to the Department of Movement Disorders, Clinic of Neurology, Clinical Centre of Serbia, Faculty of Medicine, University of Belgrade (Serbia), from January 31, 2000 to July 31, 2013, with the initial diagnoses of ‘Parkinson’s disease’, ‘parkinsonism’, ‘atypical parkinsonism’ and ‘parkinsonism plus’ within the first 8 months from the appearance of the first symptom(s). The patients were afterwards followed regularly at 4-6 month intervals, and the diagnoses were eventually established for 46 patients fulfilling the criteria for clinically probable progressive supranuclear palsy (PSP) and 36 patients for probable multiple system atrophy (MSA). Health-related quality of life was assessed using the SF-36 questionnaire (Serbian translation). Hierarchical multiple regression analysis was conducted to identify predictors of the composite scores of the SF-36. The importance of changes in the quality of life scores of patients with APS between baseline and the follow-up time-point was quantified using the Wilcoxon Signed Ranks Test. The magnitude of any difference in the quality of life changes was calculated as an effect size (ES).
Results: The final models of the hierarchical regression analysis showed that apathy, measured by the Apathy Evaluation Scale (AES) score, accounted for 59% of the variance in the Physical Health Composite Score of the SF-36 and 14% of the variance in the Mental Health Composite Score of the SF-36 (p<0.01). The changes in HRQoL were assessed in 52 patients with APS who completed the 1-year follow-up period. The analysis of the magnitude of changes in HRQoL during the one-year follow-up period has shown a sustained medium ES (0.50-0.79) for both the Physical and Mental Health composite scores and total quality of life, as well as for Physical Health, Vitality, Role Emotional and Social Functioning. Conclusion: This study provides insight into new potential predictors of HRQoL and its changes over time in patients with APS. Additionally, the identification of both prognostic markers of a poor HRQoL and the magnitude of its changes should be considered when developing comprehensive treatment-related strategies and health care programs aimed at improving HRQoL and well-being in patients with APS.
Keywords: atypical parkinsonian syndromes, follow-up study, quality of life, APS
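The abstract does not state which effect size formula was used; one common choice for paired baseline/follow-up scores is Cohen's d for paired samples, sketched here with invented SF-36 composite scores (a medium effect under the usual convention falls at |d| of roughly 0.5-0.79):

```python
import math

def cohens_d_paired(before, after):
    """Paired-samples effect size: mean change divided by the SD of the changes.
    Illustrative only -- this may not be the exact ES formula used in the study."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    sd = math.sqrt(sum((x - mean_d) ** 2 for x in diffs) / (n - 1))
    return mean_d / sd  # negative d = decline between baseline and follow-up

# Hypothetical SF-36 composite scores at baseline and 1-year follow-up
baseline = [60.0, 55.0, 70.0, 65.0, 58.0]
follow_up = [52.0, 50.0, 63.0, 60.0, 49.0]
d = cohens_d_paired(baseline, follow_up)
```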
Procedia PDF Downloads 305
705 Estimating the Efficiency of a Meta-Cognitive Intervention Program to Reduce the Risk Factors of Teenage Drivers with Attention Deficit Hyperactivity Disorder While Driving
Authors: Navah Z. Ratzon, Talia Glick, Iris Manor
Abstract:
Attention Deficit Hyperactivity Disorder (ADHD) is a chronic disorder that affects the sufferer’s functioning throughout life and in various spheres of activity, including driving. Difficulties in cognitive functioning and executive functions are often part and parcel of the ADHD diagnosis and thus form a risk factor in driving. Studies examining the effectiveness of intervention programs for improving and rehabilitating the driving of typical teenagers have been conducted in relatively small numbers, while studies on similar programs for teenagers with ADHD have been especially scarce. The aim of the present study has been to examine the effectiveness of a metacognitive occupational therapy intervention program for reducing risk factors in driving among teenagers with ADHD. The present study included 37 teenagers aged 17 to 19: 23 teenagers with ADHD, divided into experimental (11) and control (12) groups, as well as 14 non-ADHD teenagers forming a second control group. All teenagers taking part in the study were examined in the Tel Aviv University driving lab and underwent cognitive diagnoses and a driving simulator test. Every subject in the intervention group took part in three assessment meetings and two metacognitive treatment meetings. The control groups took part in two assessment meetings, with a follow-up meeting 3 months later. In all the study’s groups, the treatment’s effectiveness was tested by comparing monitoring results on the driving simulator at the first and second evaluations. In addition, the driving of 5 subjects from the intervention group was monitored continuously from a month prior to the start of the intervention, through the month of the intervention, until the end of the intervention. In the ADHD control group, the driving of 4 subjects was monitored for a period of 3 months from the end of the first evaluation.
The study’s findings were affected by the fact that the ADHD control group was different from the two other groups, exhibiting ADHD characteristics manifested by impaired executive functions and lower metacognitive abilities relative to their peers. The study found partial, moderate, non-significant correlations between driving skills and cognitive functions, executive functions, and perceptions of and attitudes towards driving. According to the driving simulator test results and the limited sampling of actual driving, it was found that a metacognitive occupational therapy intervention may be effective in reducing risk factors in driving among teenagers with ADHD relative to their peers with and without ADHD. In summary, the results of the present study indicate a positive direction that speaks to the viability of using a metacognitive occupational therapy intervention program for reducing risk factors in driving. A further study is required that will include a larger number of subjects, add actual driving monitoring hours, and assign subjects randomly to the various groups.
Keywords: ADHD, driving, driving monitoring, metacognitive intervention, occupational therapy, simulator, teenagers
Procedia PDF Downloads 306
704 Influence of Iron Content in Carbon Nanotubes on the Intensity of Hyperthermia in the Cancer Treatment
Authors: S. Wiak, L. Szymanski, Z. Kolacinski, G. Raniszewski, L. Pietrzak, Z. Staniszewska
Abstract:
The term ‘cancer’ is given to a collection of related diseases that may affect any part of the human body. It is a pathological behaviour of cells with the potential to undergo abnormal breakdown of the processes that control the proliferation, differentiation, and death of particular cells. Although cancer is commonly considered a modern disease, there are beliefs that the drastically growing number of new cases can be linked to extensively prolonged life expectancy and enhanced techniques for cancer diagnosis. Magnetic hyperthermia therapy is a novel approach to cancer treatment, which may greatly contribute to a higher efficiency of the therapy. By employing carbon nanotubes as nanocarriers for magnetic particles, it is possible to decrease the toxicity and invasiveness of the treatment through surface functionalisation. Despite appearing only in recent years, magnetic particle hyperthermia has already attracted the highest interest in the scientific and medical communities. The reason why hyperthermia therapy brings so much hope for future cancer treatment lies in the effect that it produces in malignant cells. Subjecting them to thermal shock results in the activation of numerous degradation processes inside and outside the cell. The heating process initiates mechanisms of DNA destruction, protein denaturation and induction of cell apoptosis, which may lead to tumour shrinkage and, in some cases, even complete disappearance of the cancer. The factors which have the major impact on the final efficiency of the treatment include the temperatures generated inside the tissues, the time of exposure to the heating process, and the character of the individual cancer cell type. The vast majority of cancer cells are characterised by lower pH, persistent hypoxia and a lack of nutrients, which can be associated with abnormal microvasculature. Since these conditions are not present in healthy tissues, healthy tissues should not be seriously affected by the elevation of temperature.
The aim of this work is to investigate the influence of the iron content of iron-filled carbon nanotubes on their suitability as nanoparticles for cancer therapy. In the article, the development and demonstration of the method and a model device for the hyperthermic selective destruction of cancer cells are presented. This method was based on the synthesis and functionalization of carbon nanotubes serving as nanocontainers for ferromagnetic material. The methodology for producing carbon ferromagnetic nanocontainers (FNCs) includes the synthesis of carbon nanotubes, chemical and physical characterization, increasing the content of ferromagnetic material, and biochemical functionalization involving the attachment of the key addresses. The ferromagnetic nanocontainers were synthesised in CVD and microwave plasma systems. The research work has been financed from the budget of science as research project No. PBS2/A5/31/2013.
Keywords: hyperthermia, carbon nanotubes, cancer colon cells, radio frequency field
Procedia PDF Downloads 122
703 An Evaluation of the Artificial Neural Network and Adaptive Neuro Fuzzy Inference System Predictive Models for the Remediation of Crude Oil-Contaminated Soil Using Vermicompost
Authors: Precious Ehiomogue, Ifechukwude Israel Ahuchaogu, Isiguzo Edwin Ahaneku
Abstract:
Vermicompost is the product of a decomposition process using various species of worms to create a mixture of decomposing vegetable or food waste, bedding materials, and vermicast. This process is called vermicomposting, while the rearing of worms for this purpose is called vermiculture. Several works have verified the adsorption of toxic metals by vermicompost, but its application for the retention of organic compounds is still scarce. This research brings to knowledge the effectiveness of earthworm waste (vermicompost) for the remediation of crude oil-contaminated soils. The remediation methods adopted in this study were two soil washing methods, namely batch and column processes, which represent laboratory and in-situ remediation, respectively. Characterization of the vermicompost and the crude oil-contaminated soil was performed before and after the soil washing using Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), X-ray fluorescence (XRF), X-ray diffraction (XRD) and atomic absorption spectrometry (AAS). The optimization of the washing parameters, using response surface methodology (RSM) based on a Box-Behnken design, was performed on the responses from the laboratory experimental results. This study also investigated the application of two machine learning models, the artificial neural network (ANN) and the adaptive neuro-fuzzy inference system (ANFIS), both evaluated using the coefficient of determination (R²) and the mean square error (MSE). The removal efficiency obtained from the Box-Behnken design experiment ranged from 29% to 98.9% for the batch process remediation. Optimization of the experimental factors, carried out using numerical optimization techniques by applying the desirability function method of response surface methodology (RSM), produced the highest removal efficiency of 98.9% at an adsorbent dosage of 34.53 grams, an adsorbate concentration of 69.11 g/ml, a contact time of 25.96 min, and a pH value of 7.71.
The removal efficiency obtained from the multilevel general factorial design experiment ranged from 56% to 92% for the column process remediation. The coefficient of determination (R²) for the ANN was 0.9974 and 0.9852 for the batch and column processes, respectively, showing the agreement between the experimental and predicted results. For the batch and column processes, respectively, the coefficient of determination (R²) for the RSM was 0.9712 and 0.9614, which also demonstrates agreement between the experimental and predicted findings. For the batch and column processes, the ANFIS coefficients of determination were 0.7115 and 0.9978, respectively. It can be concluded that machine learning models can predict the removal of crude oil from polluted soil using vermicompost. Therefore, it is recommended to use machine learning models to predict the removal of crude oil from contaminated soil using vermicompost.
Keywords: ANFIS, ANN, crude oil, contaminated soil, remediation, vermicompost
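The R² and MSE metrics used above to score the ANN, ANFIS, and RSM predictions can be sketched as follows; the measured and predicted removal efficiencies are made up for illustration:

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)            # residual sum of squares
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

def mse(y_true, y_pred):
    """Mean square error between measured and predicted values."""
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

# Hypothetical removal efficiencies (%): measured vs. model-predicted
measured = [29.0, 55.0, 76.0, 98.9]
predicted = [31.0, 54.0, 78.0, 97.5]
r2 = r2_score(measured, predicted)
err = mse(measured, predicted)
```

An R² near 1 and a small MSE, as reported for the ANN above, indicate close agreement between the model and the experiment.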
Procedia PDF Downloads 111
702 Boussinesq Model for Dam-Break Flow Analysis
Authors: Najibullah M, Soumendra Nath Kuiry
Abstract:
Dams and reservoirs are valued for their inestimable contributions to irrigation, water supply, flood control, electricity generation, etc., which improve the prosperity and wealth of societies across the world. Meanwhile, a dam breach could cause a devastating flood that threatens human lives and property. Failures of large dams fortunately remain very rare events. Nevertheless, a number of occurrences have been recorded worldwide, corresponding on average to one or two failures every year, and some of these accidents have had catastrophic consequences. It is therefore decisive to predict the dam-break flow for emergency planning and preparedness, as it poses a high risk to life and property. To mitigate the adverse impact of a dam break, modeling is necessary to gain a good understanding of the temporal and spatial evolution of dam-break floods. This study deals mainly with one-dimensional (1D) dam-break modeling. Less commonly used in the hydraulic research community, another possible option for modeling rapidly varied dam-break flows is the extended Boussinesq equations (BEs), which can describe the dynamics of short waves with reasonable accuracy. Unlike the shallow water equations (SWEs), the BEs take into account wave dispersion and a non-hydrostatic pressure distribution. To capture the dam-break oscillations accurately, a numerical scheme of at least fourth-order accuracy is needed to discretize the third-order dispersion terms present in the extended BEs. The scope of this work is therefore to develop a 1D Boussinesq model, fourth-order accurate in both space and time, for dam-break flow analysis using a combined finite-volume / finite-difference scheme. The spatial discretization of the flux and dispersion terms is achieved through a combination of finite-volume and finite-difference approximations.
The flux term was solved using a finite-volume discretization, whereas the bed source and dispersion terms were discretized using a centered finite-difference scheme. Time integration was achieved in two stages, namely a third-order Adams-Bashforth predictor stage and a fourth-order Adams-Moulton corrector stage. The 1D Boussinesq model was implemented in Python 2.7.5. The performance of the developed model was evaluated by comparing its predictions with the volume-of-fluid (VOF) based commercial model ANSYS-CFX. The developed model is used to analyze the risk of cascading dam failures similar to the Panshet dam failure of 1961 in Pune, India. Moreover, this model can be used to predict wave overtopping more accurately than shallow water models for designing coastal protection structures.
Keywords: Boussinesq equation, coastal protection, dam-break flow, one-dimensional model
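The two-stage time integration named above can be sketched on a scalar toy ODE rather than the actual Boussinesq system: a third-order Adams-Bashforth predictor followed by a fourth-order Adams-Moulton corrector, with start-up values supplied here from the exact solution of du/dt = -u:

```python
import math

def f(t, u):
    # Toy right-hand side standing in for the spatially discretized fluxes
    return -u

def ab3_am4_step(h, t, u, fm1, fm2):
    """One predictor-corrector step.
    fm1, fm2 are f evaluated at the two previous time levels."""
    fn = f(t, u)
    # Predictor: 3rd-order Adams-Bashforth
    up = u + h / 12.0 * (23.0 * fn - 16.0 * fm1 + 5.0 * fm2)
    # Corrector: 4th-order Adams-Moulton, with f(t+h) taken at the predicted value
    uc = u + h / 24.0 * (9.0 * f(t + h, up) + 19.0 * fn - 5.0 * fm1 + fm2)
    return uc, fn

# March du/dt = -u from t = 0.2 to t = 1.0 with exact start-up history
h, t, u = 0.1, 0.2, math.exp(-0.2)
fm1, fm2 = f(0.1, math.exp(-0.1)), f(0.0, 1.0)
for _ in range(8):
    u, fnew = ab3_am4_step(h, t, u, fm1, fm2)
    fm1, fm2 = fnew, fm1
    t += h
```

In the actual model, u would be the vector of conserved variables on the 1D grid and f the combined finite-volume flux and finite-difference dispersion operators.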
Procedia PDF Downloads 232
701 Adapting Cyber Physical Production Systems to Small and Mid-Size Manufacturing Companies
Authors: Yohannes Haile, Dipo Onipede, Jr., Omar Ashour
Abstract:
The main thrust of our research is to determine the Industry 4.0 readiness of small and mid-size manufacturing companies in our region and assist them in implementing Cyber Physical Production System (CPPS) capabilities. Adopting CPPS capabilities will help organizations realize improved quality, order delivery, throughput, new value creation, and reduced idle time of machines and work centers in their manufacturing operations. The key metrics for the assessment include the level of intelligence, internal and external connections, responsiveness to internal and external environmental changes, capabilities for customization of products with reference to cost, level of additive manufacturing, automation, and robotics integration, and capabilities to manufacture hybrid products in the near term, where near term is defined as 0 to 18 months. In our initial evaluation of several manufacturing firms which are profitable and successful in what they do, we found a low level of the Physical-Digital-Physical (PDP) loop in their manufacturing operations, whereas 100% of the firms included in this research have specialized manufacturing core competencies that have differentiated them from their competitors. The level of automation and robotics integration is in the low to medium range, where low is defined as less than 30%, and medium as 30 to 70%, of manufacturing operations including automation and robotics. However, there is a significant drive to include these capabilities at the present time. As it pertains to intelligence and connection of manufacturing systems, these are observed to be low, with significant variance in tying manufacturing operations management to Enterprise Resource Planning (ERP). Furthermore, the integration of additive manufacturing in general, and 3D printing in particular, is observed to be low, but with significant upside for integrating it in manufacturing operations in the near future.
To hasten the readiness of the local and regional manufacturing companies for Industry 4.0 and the transition towards CPPS capabilities, our working group (ADMAR Working Group), in partnership with our university, has been engaged with the local and regional manufacturing companies. The goal is to increase awareness, share know-how and capabilities, initiate joint projects, and investigate the possibility of establishing the Center for Cyber Physical Production Systems Innovation (C2P2SI). The center is intended to support local and regional university-industry research on implementing intelligent factories, enhance new value creation through disruptive innovations, the development of hybrid and data-enhanced products, and the creation of digital manufacturing enterprises. All these efforts will enhance local and regional economic development and educate students with well-developed knowledge of the applications of cyber physical manufacturing systems and Industry 4.0.
Keywords: automation, cyber-physical production system, digital manufacturing enterprises, disruptive innovation, new value creation, physical-digital-physical loop
Procedia PDF Downloads 140
700 Temperature Dependence of the Optoelectronic Properties of InAs(Sb)-Based LED Heterostructures
Authors: Antonina Semakova, Karim Mynbaev, Nikolai Bazhenov, Anton Chernyaev, Sergei Kizhaev, Nikolai Stoyanov
Abstract:
At present, heterostructures are used for the fabrication of almost all types of optoelectronic devices. Our research focuses on the optoelectronic properties of InAs(Sb) solid solutions, which are widely used in the fabrication of light emitting diodes (LEDs) operating in the middle wavelength infrared range (MWIR). This spectral range (2-6 μm) is relevant for laser diode spectroscopy of gases and molecules, for systems for the detection of explosive substances, for medical applications, and for environmental monitoring. The fabrication of MWIR LEDs that operate efficiently at room temperature is mainly hindered by the predominance of non-radiative Auger recombination of charge carriers over the process of radiative recombination, which makes practical application of LEDs difficult. However, non-radiative recombination can be partly suppressed in quantum-well structures. In this regard, studies of such structures are quite topical. In this work, electroluminescence (EL) of LED heterostructures based on InAs(Sb) epitaxial films with the molar fraction of InSb ranging from 0 to 0.09 and on multiple quantum well (MQW) structures was studied in the temperature range 4.2-300 K. The growth of the heterostructures was performed by metal-organic chemical vapour deposition on InAs substrates. On top of the active layer, a wide-bandgap InAsSb(Ga,P) barrier was formed. At low temperatures (4.2-100 K) stimulated emission was observed. As the temperature increased, the emission became spontaneous. The transition from stimulated to spontaneous emission occurred at different temperatures for structures with different InSb contents in the active region. The temperature-dependent carrier lifetimes, limited by radiative recombination and the most probable Auger processes (for the materials under consideration, CHHS and CHCC), were calculated within the framework of the Kane model.
The effect of various recombination processes on the carrier lifetime was studied, and the dominant role of Auger processes was established. For the MQW structures, quantization energies for electrons, light holes and heavy holes were calculated. A characteristic feature of the experimental EL spectra of these structures was the presence of peaks with energy different from that of the calculated optical transitions between the first quantization levels for electrons and heavy holes. The obtained results showed a strong effect of the specific electronic structure of InAsSb on the energy and intensity of optical transitions in nanostructures based on this material. For the structure with MQWs in the active layer, a very weak temperature dependence of the EL peak was observed at high temperatures (>150 K), which makes it attractive for fabricating temperature-resistant gas sensors operating in the middle-infrared range.
Keywords: electroluminescence, InAsSb, light emitting diode, quantum wells
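The competition between radiative and Auger recombination discussed above follows from the usual lifetime expressions τ_rad ≈ 1/(Bn) and τ_Auger ≈ 1/(Cn²), whose inverse rates add. The coefficient values below are order-of-magnitude numbers we chose for illustration of narrow-gap III-V behavior; they are not the values computed in the study.

```python
def tau_radiative(n, B=1e-10):
    # radiative lifetime in s; B in cm^3/s, n = carrier density in cm^-3
    return 1.0 / (B * n)

def tau_auger(n, C=1e-26):
    # Auger lifetime in s; C in cm^6/s (quadratic in carrier density)
    return 1.0 / (C * n * n)

def tau_effective(n, B=1e-10, C=1e-26):
    # recombination rates add, so inverse lifetimes add
    return 1.0 / (1.0 / tau_radiative(n, B) + 1.0 / tau_auger(n, C))
```

Because the Auger rate scales as n² while the radiative rate scales as n, the Auger channel dominates at high carrier density, which is why it limits room-temperature LED efficiency.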
Procedia PDF Downloads 212
699 In Silico Modeling of Drugs Milk/Plasma Ratio in Human Breast Milk Using Structures Descriptors
Authors: Navid Kaboudi, Ali Shayanfar
Abstract:
Introduction: Feeding infants with safe milk from the beginning of their life is an important issue. Drugs which are used by mothers can affect the composition of milk in a way that is not only unsuitable, but also toxic for infants. Consumption of permeable drugs during that sensitive period by the mother could lead to serious side effects in the infant. Due to the ethical restrictions of drug testing on humans, especially women during their lactation period, computational approaches based on structural parameters could be useful. The aim of this study is to develop mechanistic models to predict the milk/plasma (M/P) ratio of drugs during the breastfeeding period based on their structural descriptors. Methods: Two hundred and nine different chemicals with their M/P ratios were used in this study. All drugs were categorized into two groups based on their M/P value, following the Malone classification: 1) drugs with M/P > 1, which are considered high risk, and 2) drugs with M/P < 1, which are considered low risk. Thirty-eight chemical descriptors were calculated with ACD/Labs 6.00 and DataWarrior software in order to assess penetration during the breastfeeding period. Later on, four specific models based on the number of hydrogen bond acceptors, polar surface area, total surface area, and number of acidic oxygens were established for the prediction. The mentioned descriptors can predict penetration with acceptable accuracy. For the remaining compounds (N = 147, 158, 160, and 174 for models 1 to 4, respectively) of each model, binary logistic regression with SPSS 21 was performed in order to obtain a model to predict the penetration ratio of compounds. Only structural descriptors with p-value < 0.1 remained in the final model.
Results and discussion: Four different models based on the number of hydrogen bond acceptors, polar surface area, and total surface area were obtained in order to predict the penetration of drugs into human milk during the breastfeeding period. About 3-4% of milk consists of lipids, and the amount of lipid increases after parturition. Lipid-soluble drugs diffuse along with fats from plasma to the mammary glands. Lipophilicity plays a vital role in predicting the penetration class of drugs during the lactation period. It was shown in the logistic regression models that compounds with a number of hydrogen bond acceptors, PSA and TSA above 5, 90 and 25, respectively, are less permeable to milk because they are less soluble in the fats present in milk. The pH of milk is acidic and, due to that, basic compounds tend to be more concentrated in milk than in plasma, while acidic compounds may reach lower concentrations in milk than in plasma. Conclusion: In this study, we developed four regression-based models to predict the penetration class of drugs during the lactation period. The obtained models can lead to a higher speed in the drug development process, saving energy and costs. Milk/plasma ratio assessment of drugs requires multiple steps of animal testing, which has its own ethical issues. QSAR modeling could help scientists reduce the amount of animal testing, and our models are also eligible to do that.
Keywords: logistic regression, breastfeeding, descriptors, penetration
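The binary classification described above (the study used SPSS 21 with several descriptors) can be sketched as a minimal single-descriptor logistic regression fitted by gradient descent. The hydrogen-bond-acceptor threshold of 5 echoes the abstract, but the data points below are invented for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.5, epochs=5000):
    """Single-descriptor binary logistic regression via batch gradient descent."""
    w, b = 0.0, 0.0
    n = float(len(xs))
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y   # gradient of the log-loss
            gw += err * x
            gb += err
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# hypothetical hydrogen-bond-acceptor counts; class 1 = low milk penetration
hba   = [1, 2, 3, 4, 6, 7, 8, 9]
label = [0, 0, 0, 0, 1, 1, 1, 1]
w, b = fit_logistic(hba, label)
predict = lambda x: 1 if sigmoid(w * x + b) >= 0.5 else 0
```

A real model would combine several descriptors and report odds ratios and p-values per descriptor, as SPSS does.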
Procedia PDF Downloads 71
698 Prominent Lipid Parameters Correlated with Trunk-to-Leg and Appendicular Fat Ratios in Severe Pediatric Obesity
Authors: Mustafa M. Donma, Orkide Donma
Abstract:
The examination of both serum lipid fractions and the body's lipid composition is quite informative during the evaluation of obesity stages. Within this context, alterations in lipid parameters are commonly observed. The variations in the fat distribution of the body are also noteworthy. Total cholesterol (TC), triglycerides (TRG), low density lipoprotein-cholesterol (LDL-C), and high density lipoprotein-cholesterol (HDL-C) are considered the basic lipid fractions. Fat deposited in the trunk and extremities may give a considerable amount of information and different messages during discrete health states. Ratios are also derived from the distinct fat distribution in these areas. Trunk-to-leg fat ratio (TLFR) and trunk-to-appendicular fat ratio (TAFR) are the most recently introduced ratios. In this study, lipid fractions as well as TLFR and TAFR were evaluated, and the distinctions among healthy, obese (OB), and morbid obese (MO) groups were investigated. Three groups [normal body mass index (N-BMI), OB, MO] were constituted from a population aged 6 to 18 years. Ages and sexes of the groups were matched. The study protocol was approved by the Non-interventional Ethics Committee of Tekirdag Namik Kemal University. Written informed consent forms were obtained from the parents of the participants. Anthropometric measurements (height, weight, waist circumference, hip circumference, head circumference, neck circumference) were obtained and recorded during the physical examination. Body mass index values were calculated. Total, trunk, leg, and arm fat mass values were obtained by TANITA bioelectrical impedance analysis. These values were used to calculate TLFR and TAFR. Systolic (SBP) and diastolic (DBP) blood pressures were measured. Routine biochemical tests including TC, TRG, LDL-C, HDL-C, and insulin were performed. Data were evaluated using SPSS software. A p value smaller than 0.05 was accepted as statistically significant.
There was no difference among the groups in terms of age or gender distribution. No statistically significant difference was observed in terms of DBP, TLFR or serum lipid fractions. Higher SBP values were measured in both OB and MO children than in those with N-BMI. TAFR showed a significant difference between the N-BMI and OB groups. Statistically significant increases were detected between insulin values of the N-BMI group and the OB as well as MO groups. There were bivariate correlations between LDL and TLFR (r=0.396; p=0.037) as well as TAFR values (r=0.413; p=0.029) in the MO group. When adjusted for SBP and DBP, partial correlations were calculated as (r=0.421; p=0.032) and (r=0.438; p=0.025) for LDL-TLFR and LDL-TAFR, respectively. Much stronger partial correlations were obtained for the same pairs (r=0.475; p=0.019 and r=0.473; p=0.020, respectively) upon controlling for TRG and HDL-C. The much stronger partial correlations observed in MO children emphasize the potential transition from morbid obesity to metabolic syndrome. From these findings it was concluded that LDL-C may be suggested as a discriminating parameter between OB and MO children.
Keywords: children, lipid parameters, obesity, trunk-to-leg fat ratio, trunk-to-appendicular fat ratio
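The partial-correlation adjustment used above (e.g. LDL vs TLFR controlling for blood pressure) can be illustrated for a single covariate: correlate the residuals of each variable after regressing out the covariate. The sketch below uses invented numbers and checks the residual method against the equivalent closed-form formula.

```python
import math

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def residuals(y, z):
    # residuals of a simple linear regression of y on z (with intercept)
    n = len(y)
    mz, my = sum(z) / n, sum(y) / n
    beta = sum((zi - mz) * (yi - my) for zi, yi in zip(z, y)) \
           / sum((zi - mz) ** 2 for zi in z)
    alpha = my - beta * mz
    return [yi - (alpha + beta * zi) for zi, yi in zip(z, y)]

def partial_corr(x, y, z):
    # correlation between x and y after removing the linear effect of z
    return pearson(residuals(x, z), residuals(y, z))

def partial_corr_formula(x, y, z):
    # equivalent closed form for a single covariate
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# made-up illustrative data (not the study's measurements)
x = [2.0, 4.0, 5.0, 7.0, 9.0]
y = [1.0, 3.0, 2.0, 6.0, 8.0]
z = [1.0, 2.0, 3.0, 4.0, 5.0]
```

Adjusting for two covariates at once (SBP and DBP, or TRG and HDL-C, as in the study) extends the same idea by regressing on both before correlating the residuals.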
Procedia PDF Downloads 111
697 Aerosol Chemical Composition in Urban Sites: A Comparative Study of Lima and Medellin
Authors: Guilherme M. Pereira, Kimmo Teinïla, Danilo Custódio, Risto Hillamo, Célia Alves, Pérola de C. Vasconcellos
Abstract:
South American large cities often present serious air pollution problems, and their atmospheric composition is influenced by a variety of emission sources. The South American Emissions, Megacities and Climate (SAEMC) project has focused on the study of emissions and their influence on climate in the largest South American cities, and it also included Lima (Peru) and Medellin (Colombia), sites where few studies of this kind have been done. Lima is a coastal city with more than 8 million inhabitants and the second largest city in South America. Medellin is a 2.5-million-inhabitant city and the second largest city in Colombia; it is situated in a valley. The samples were collected on quartz fiber filters with high-volume samplers (Hi-Vol), over 24-hour sampling periods, during intensive campaigns at both sites in July 2010. Several species were determined in the aerosol samples of Lima and Medellin: organic and elemental carbon (OC and EC) by thermal-optical analysis; biomass burning tracers (levoglucosan - Lev, mannosan - Man and galactosan - Gal) by high-performance anion exchange ion chromatography with mass spectrometric detection; and water-soluble ions by ion chromatography. The average particulate matter was similar for both campaigns; the PM10 concentrations were above the World Health Organization guideline (50 µg m⁻³ - daily limit) in 40% of the samples in Medellin, while in Lima they were above that value in 15% of the samples. The average total ion concentration was higher in Lima (17450 ng m⁻³ in Lima and 3816 ng m⁻³ in Medellin), and the average concentrations of sodium and chloride were higher at this site; these species were also better correlated (Pearson's coefficient = 0.63), suggesting a stronger influence of marine aerosol at the site due to its location on the coast. Sulphate concentrations were also much higher at the Lima site, which may be explained by a greater influence of marine-originated sulphate.
However, the OC, EC and monosaccharide average concentrations were higher at the Medellin site; this may be due to the lower dispersion of pollutants resulting from the site's location and a larger influence of biomass burning sources. The levoglucosan average concentration was 95 ng m⁻³ for Medellin and 16 ng m⁻³ for Lima, and OC was well correlated with levoglucosan (Pearson's coefficient = 0.86) in Medellin, suggesting a stronger influence of biomass burning on the organic aerosol at this site. The Lev/Man ratio is often related to the type of biomass burned and was close to 18, similar to that observed in previous studies at biomass-burning-impacted sites in the Amazon region; backward trajectories also suggested the transport of aerosol from that region. Biomass burning appears to have a larger influence on the air quality in Medellin, in addition to vehicular emissions, while Lima showed a larger influence of marine aerosol during the study period.
Keywords: aerosol transport, atmospheric particulate matter, biomass burning, SAEMC project
Procedia PDF Downloads 263
696 Preparation of Metallic Nanoparticles with the Use of Reagents of Natural Origin
Authors: Anna Drabczyk, Sonia Kudlacik-Kramarczyk, Dagmara Malina, Bozena Tyliszczak, Agnieszka Sobczak-Kupiec
Abstract:
Nowadays, nano-sized materials are a very popular group of materials among scientists. What is more, these materials find application in a wide range of areas. A constantly increasing demand for nanomaterials, including metallic nanoparticles such as silver or gold ones, is therefore observed, and new routes for their preparation are sought. Considering the potential applications of nanoparticles, it is important to select an adequate methodology of preparation because it determines their size and shape. Among the most commonly applied methods of preparation of nanoparticles, chemical and electrochemical techniques are leading. However, growing attention is currently directed toward the biological or biochemical aspects of the syntheses of metallic nanoparticles. This is associated with a trend of developing new routes of preparation of given compounds according to the principles of green chemistry. These principles involve, e.g., the reduction of the use of toxic compounds in the synthesis as well as the reduction of the energy demand or minimization of the generated waste. As a result, a growing popularity of the use of such components as natural plant extracts, infusions or essential oils is observed. Such natural substances may be used both as a reducing agent for metal ions and as a stabilizing agent for the formed nanoparticles; therefore they can replace synthetic compounds previously used for the reduction of metal ions or for the stabilization of the obtained nanoparticle suspensions. Methods that proceed in the presence of the previously mentioned natural compounds are environmentally friendly and require no toxic reagents. Methodology: The presented research involves the preparation of silver nanoparticles using selected plant extracts, e.g. artichoke extract. Extracts of natural origin were used as reducing and stabilizing agents at the same time.
Furthermore, the syntheses were carried out in the presence of an additional polymeric stabilizing agent. Next, such features of the obtained nanoparticle suspensions as total antioxidant activity and content of phenolic compounds were characterized. The first of the mentioned studies involved the reaction with the DPPH (2,2-diphenyl-1-picrylhydrazyl) radical. The content of phenolic compounds was determined using the Folin-Ciocalteu technique. Furthermore, an essential issue was also determining the stability of the formed nanoparticle suspensions. Conclusions: It was demonstrated that metallic nanoparticles may be obtained using plant extracts or infusions as stabilizing or reducing agents. The methodology applied, i.e. the type of plant extract used during the synthesis, had an impact on the content of phenolic compounds as well as on the size and polydispersity of the obtained nanoparticles. What is more, it is possible to prepare nano-sized particles characterized by properties desirable from the viewpoint of their potential application, and such an effect may be achieved with the use of non-toxic reagents of natural origin. Furthermore, the proposed methodology stays in line with the principles of green chemistry.
Keywords: green chemistry principles, metallic nanoparticles, plant extracts, stabilization of nanoparticles
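The DPPH assay mentioned above is usually reported as percent radical scavenging computed from absorbance readings (conventionally taken at 517 nm). A minimal sketch of that calculation follows; the absorbance values are invented, not the study's measurements.

```python
def dpph_scavenging(abs_control, abs_sample):
    """Percent DPPH radical scavenging activity from absorbance readings:
    the more antioxidant the sample, the lower its absorbance relative
    to the radical-only control."""
    return (abs_control - abs_sample) / abs_control * 100.0

# hypothetical readings: control (DPPH only) 0.80, sample with extract 0.20
activity = dpph_scavenging(0.80, 0.20)
```

The Folin-Ciocalteu phenolic content is obtained analogously, but by interpolating sample absorbance on a gallic-acid calibration curve rather than by a simple ratio.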
Procedia PDF Downloads 125
695 Saco Sweet Cherry: Phenolic Profile and Biological Activity of Coloured and Non-Coloured Fractions
Authors: Catarina Bento, Ana Carolina Gonçalves, Fábio Jesus, Luís Rodrigues Silva
Abstract:
Increasing evidence suggests that a diet rich in fruits and vegetables plays important roles in the prevention of chronic diseases, such as heart disease, cancer, stroke, diabetes and Alzheimer's disease, among others. Fruits and vegetables have gained prominence due to their richness in bioactive compounds, being the focus of many studies due to their biological properties acting as health promoters. Prunus avium Linnaeus (L.), commonly known as sweet cherry, has been the centre of attention due to its health benefits and has been highly studied. In Portugal, most of the cherry production comes from the Fundão region. Saco is one of the most important cultivars produced in this region and has been granted geographical protection. In this work, we prepared three extracts through solid-phase extraction (SPE): a whole extract, fraction I (non-coloured phenolics) and fraction II (coloured phenolics). The three extracts were used to determine the phenolic profile of the Saco cultivar by liquid chromatography with diode array detection (LC-DAD). This was followed by the evaluation of their biological potential, testing the capacity of all extracts to scavenge free radicals (DPPH•, nitric oxide (•NO) and superoxide radical (O2•⁻)) and to inhibit the α-glucosidase enzyme. Additionally, we evaluated, for the first time, the protective effects against peroxyl radical (ROO•)-induced hemoglobin oxidation and hemolysis in human erythrocytes. A total of 16 non-coloured phenolics were detected, 3-O-caffeoylquinic and p-coumaroylquinic acids being the main ones, and 6 anthocyanins were found, among which cyanidin-3-O-rutinoside represented the majority. With respect to antioxidant activity, Saco showed great antioxidant potential in a concentration-dependent manner, demonstrated through the DPPH•, •NO and O2•⁻ radical assays, and a greater ability to inhibit the α-glucosidase enzyme in comparison to acarbose, a drug regularly used to treat diabetes.
Additionally, Saco proved to be effective in protecting erythrocytes against oxidative damage in a concentration-dependent manner, against both hemoglobin oxidation and hemolysis. Our work demonstrated that the Saco cultivar is an excellent source of phenolic compounds, which are natural antioxidants that readily capture reactive species such as ROO• before they can attack the erythrocyte membrane. In a general way, the whole extract showed the best efficiency, most likely due to a synergistic interaction between the different compounds. Finally, comparing the two separate fractions, the coloured fraction showed the highest activity in all the assays, proving to be the biggest contributor to the biological activity of Saco cherries.
Keywords: biological potential, coloured phenolics, non-coloured phenolics, sweet cherry
Procedia PDF Downloads 256
694 Influence of Recycled Concrete Aggregate Content on the Rebar/Concrete Bond Properties through Pull-Out Tests and Acoustic Emission Measurements
Authors: L. Chiriatti, H. Hafid, H. R. Mercado-Mendoza, K. L. Apedo, C. Fond, F. Feugeas
Abstract:
Substituting natural aggregate with recycled aggregate coming from concrete demolition represents a promising alternative to face the issues of both the depletion of natural resources and the congestion of waste storage facilities. However, the crushing process of concrete demolition waste, currently in use to produce recycled concrete aggregate, does not allow the complete separation of natural aggregate from a variable amount of adhered mortar. Given the physicochemical characteristics of the latter, the introduction of recycled concrete aggregate into a concrete mix modifies, to a certain extent, both fresh and hardened concrete properties. As a consequence, the behavior of recycled reinforced concrete members could likely be influenced by the specificities of recycled concrete aggregates. Beyond the mechanical properties of concrete, and as a result of the composite character of reinforced concrete, the bond characteristics at the rebar/concrete interface have to be taken into account in an attempt to describe accurately the mechanical response of recycled reinforced concrete members. Hence, a comparative experimental campaign, including 16 pull-out tests, was carried out. Four concrete mixes with different recycled concrete aggregate content were tested. The main mechanical properties (compressive strength, tensile strength, Young’s modulus) of each concrete mix were measured through standard procedures. A single 14-mm-diameter ribbed rebar, representative of the diameters commonly used in the domain of civil engineering, was embedded into a 200-mm-side concrete cube. The resulting concrete cover is intended to ensure a pull-out type failure (i.e. exceedance of the rebar/concrete interface shear strength). A pull-out test carried out on the 100% recycled concrete specimen was enriched with exploratory acoustic emission measurements. 
Acoustic event location was performed by means of eight piezoelectric transducers distributed over the whole surface of the specimen. The resulting map was compared to existing data related to natural aggregate concrete. Damage distribution around the reinforcement and the main features of the characteristic bond stress/free-end slip curve appeared to be similar to previous results obtained through comparable studies carried out on natural aggregate concrete. This seems to show that the usual bond mechanism sequence ('chemical adhesion', mechanical interlocking and friction) remains unchanged despite the addition of recycled concrete aggregate. However, the results also suggest that bond efficiency is somewhat improved through the use of recycled concrete aggregate. This observation appears counter-intuitive given the diminution of the main concrete mechanical properties with increasing recycled concrete aggregate content. As a consequence, the impact of recycled concrete aggregate content on bond characteristics represents an important factor which should be taken into account and is likely to be further explored in order to determine flexural parameters such as deflection or crack distribution.
Keywords: acoustic emission monitoring, high-bond steel rebar, pull-out test, recycled aggregate concrete
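Pull-out results such as those above are conventionally expressed as an average bond stress, the measured force divided by the embedded lateral surface of the bar, τ = F / (π·d·l_e). The 14 mm diameter comes from the abstract; the embedment length and peak load below are our assumptions for illustration only.

```python
import math

def average_bond_stress(pullout_force_n, bar_diameter_m, embed_length_m):
    """Average bond stress (Pa) over the embedded rebar surface:
    force divided by the cylindrical contact area pi * d * l_e."""
    return pullout_force_n / (math.pi * bar_diameter_m * embed_length_m)

d = 0.014          # 14 mm ribbed rebar, as in the abstract
l_e = 5 * d        # assumed embedment length of five bar diameters
tau = average_bond_stress(50e3, d, l_e)   # assumed 50 kN peak pull-out load
```

Plotting this stress against the measured free-end slip gives the characteristic bond stress/slip curve discussed in the abstract.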
Procedia PDF Downloads 171
693 Assessment of Five Photoplethysmographic Methods for Estimating Heart Rate Variability
Authors: Akshay B. Pawar, Rohit Y. Parasnis
Abstract:
Heart Rate Variability (HRV) is a widely used indicator of the regulation between the autonomic nervous system (ANS) and the cardiovascular system. Besides being non-invasive to measure, it also has the potential to predict mortality in cases involving critical injuries. The gold standard method for determining HRV is based on the analysis of RR interval time series extracted from ECG signals. However, because it is much more convenient to obtain photoplethysmographic (PPG) signals than ECG signals (which require the attachment of several electrodes to the body), many researchers have used pulse cycle intervals instead of RR intervals to estimate HRV and have compared this method with the gold standard technique. Though most of their observations indicate a strong correlation between the two methods, recent studies show that in healthy subjects, except for a few parameters, the pulse-based method cannot be a surrogate for the standard RR interval-based method. Moreover, the former tends to overestimate short-term variability in heart rate. This calls for improvements in, or alternatives to, the pulse-cycle interval method. In this study, besides the systolic peak-peak interval method (PP method), which has been studied several times, four recent PPG-based techniques, namely the first derivative peak-peak interval method (P1D method), the second derivative peak-peak interval method (P2D method), the valley-valley interval method (VV method) and the tangent-intersection interval method (TI method), were compared with the gold standard technique. ECG and PPG signals were obtained from 10 young and healthy adults (both males and females) seated in an armchair. In order to de-noise these signals and eliminate baseline drift, they were passed through digital filters.
After filtering, the following HRV parameters were computed from PPG using each of the five methods and also from ECG using the gold standard method: time-domain parameters (SDNN, pNN50 and RMSSD) and frequency-domain parameters (very-low-frequency power (VLF), low-frequency power (LF), high-frequency power (HF) and total power (TP)). In addition, Poincaré plots were constructed and their SD1/SD2 ratios determined. The resulting sets of parameters were compared with those yielded by the standard method using measures of statistical correlation (correlation coefficient) as well as statistical agreement (Bland-Altman plots). From the viewpoint of correlation, our results show that the best PPG-based methods for the determination of most parameters and Poincaré plots are the P2D method (more than 93% correlation with the standard method) and the PP method (mean correlation: 88%), whereas the TI, VV and P1D methods perform poorly (<70% correlation in most cases). However, our evaluation of statistical agreement using Bland-Altman plots shows that none of the five techniques agrees satisfactorily with the gold standard method as far as time-domain parameters are concerned. In conclusion, excellent statistical correlation implies that certain PPG-based methods provide a good amount of information on the pattern of heart rate variation, whereas poor statistical agreement implies that PPG cannot completely replace ECG in the determination of HRV.
Keywords: photoplethysmography, heart rate variability, correlation coefficient, Bland-Altman plot
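The time-domain parameters named above have standard definitions over an interval series, whether those intervals come from ECG R-peaks or PPG pulse cycles. A minimal sketch, using a short made-up RR series in milliseconds:

```python
import math
import statistics

def sdnn(rr_ms):
    # standard deviation of all normal-to-normal intervals
    return statistics.stdev(rr_ms)

def rmssd(rr_ms):
    # root mean square of successive interval differences
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def pnn50(rr_ms):
    # percentage of successive differences exceeding 50 ms
    diffs = [abs(b - a) for a, b in zip(rr_ms, rr_ms[1:])]
    return 100.0 * sum(1 for d in diffs if d > 50) / len(diffs)

rr_ms = [800, 810, 790, 850, 800]   # made-up interval series, in ms
```

RMSSD and pNN50 emphasize beat-to-beat (short-term) variability, which is exactly the component the abstract notes that pulse-based methods tend to overestimate.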
Procedia PDF Downloads 324
692 Evaluation of the Cytotoxicity and Cellular Uptake of a Cyclodextrin-Based Drug Delivery System for Cancer Therapy
Authors: Caroline Mendes, Mary McNamara, Orla Howe
Abstract:
Drug delivery systems are proposed for use in cancer treatment to specifically target cancer cells and deliver a therapeutic dose without affecting normal cells. For that purpose, the use of folate receptors (FR) can be considered a key strategy, since they are commonly over-expressed in cancer cells. In this study, cyclodextrins (CD) have been used as vehicles to target FR and deliver the chemotherapeutic drug methotrexate (MTX). CDs have the ability to form inclusion complexes, in which molecules of suitable dimensions are included within their cavities. Here, β-CD has been modified using folic acid so as to specifically target the FR. Thus, this drug delivery system consists of β-CD, folic acid and MTX (CDEnFA:MTX). Cellular uptake of folic acid is mediated with high affinity by folate receptors, while the cellular uptake of antifolates, such as MTX, is mediated with high affinity by the reduced folate carriers (RFCs). This study addresses the gene (mRNA) and protein expression levels of FRs and RFCs in the cancer cell lines CaCo-2, SKOV-3, HeLa, MCF-7 and A549 and the normal cell line BEAS-2B, quantified by real-time polymerase chain reaction (real-time PCR) and flow cytometry, respectively. From that, four cell lines with different levels of FRs were chosen for cytotoxicity assays of MTX and CDEnFA:MTX using the MTT assay. Real-time PCR and flow cytometry data demonstrated that all cell lines ubiquitously express moderate levels of RFC. These experiments have also shown that levels of FR protein are high in CaCo-2 cells, moderate in SKOV-3, HeLa and MCF-7 cells, and low in A549 and BEAS-2B cells. FRs are highly expressed in all the cancer cell lines analysed when compared to the normal cell line BEAS-2B. The cell lines CaCo-2, MCF-7, A549 and BEAS-2B were used in the cell viability assays.
Forty-eight hours of treatment with the free drug and the complex resulted in IC50 values (free MTX and CDEnFA:MTX, respectively) of 93.9 µM ± 15.2 and 56.0 µM ± 4.0 for CaCo-2, 118.2 µM ± 16.8 and 97.8 µM ± 12.3 for MCF-7, 36.4 µM ± 6.9 and 75.0 µM ± 10.5 for A549, and 132.6 µM ± 16.1 and 288.1 µM ± 26.3 for BEAS-2B. These results demonstrate that free MTX is more toxic towards cell lines expressing low levels of FR, such as BEAS-2B. More importantly, these results demonstrate that the inclusion complex CDEnFA:MTX showed greater cytotoxicity than the free drug towards the high-FR-expressing CaCo-2 cells, indicating that it has the potential to target this receptor, enhancing the specificity and the efficiency of the drug. The use of cell imaging by confocal microscopy has allowed visualisation of FR targeting in cancer cells, as well as identification of the internalisation pathway of the drug. Hence, the cellular uptake and internalisation process of this drug delivery system is being addressed.
Keywords: cancer treatment, cyclodextrins, drug delivery, folate receptors, reduced folate carriers
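IC50 values like those reported above are extracted from MTT dose-response data; a quick estimate can be obtained by log-linear interpolation between the two concentrations bracketing 50% viability (full studies typically fit a four-parameter logistic curve instead). The dose-response points below are invented for illustration.

```python
import math

def ic50_loglinear(concs, viability):
    """Estimate IC50 (same units as concs) by log-linear interpolation
    between the two doses bracketing 50% viability.
    concs must be ascending and viability roughly descending."""
    for (c1, v1), (c2, v2) in zip(zip(concs, viability),
                                  zip(concs[1:], viability[1:])):
        if v1 >= 50.0 >= v2:
            frac = (v1 - 50.0) / (v1 - v2)
            log_ic50 = math.log10(c1) + frac * (math.log10(c2) - math.log10(c1))
            return 10 ** log_ic50
    raise ValueError("50% viability is not bracketed by the data")

# invented dose-response points (concentration in uM, viability in %)
ic50 = ic50_loglinear([1, 10, 100, 1000], [95.0, 75.0, 25.0, 5.0])
```

Interpolating on the log-concentration axis matches the roughly sigmoidal shape of dose-response curves when doses are spaced by decades.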
Procedia PDF Downloads 310