Search results for: central management system
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25684

1444 Staphylococcus aureus Septic Arthritis and Necrotizing Fasciitis in a Patient with Undiagnosed Diabetes Mellitus

Authors: Pedro Batista, André Vinha, Filipe Castelo, Bárbara Costa, Ricardo Sousa, Raquel Ricardo, André Pinto

Abstract:

Background: Septic arthritis is a diagnosis that must be considered in any patient presenting with acute joint swelling and fever. Among the several risk factors for septic arthritis, such as age, rheumatoid arthritis, recent surgery, or skin infection, diabetes mellitus can sometimes be the main risk factor. Staphylococcus aureus is the most common pathogen isolated in septic arthritis; however, it is uncommon in monomicrobial necrotizing fasciitis. Objectives: To report a case of concomitant septic arthritis and necrotizing fasciitis in a patient with diabetes that was undiagnosed at presentation. Study Design & Methods: We report the case of a 58-year-old, previously healthy Portuguese man who presented to the emergency department with fever and left knee swelling and pain of two days' duration. Blood work revealed ketonemia of 6.7 mmol/L and glycemia of 496 mg/dL. The vital signs were notable for a temperature of 38.5 °C and a heart rate of 123 bpm. The left knee showed edema and inflammatory signs. Computed tomography of the left knee showed diffuse edema of the subcutaneous cellular tissue and soft-tissue air bubbles. A diagnosis of septic arthritis and necrotizing fasciitis was made, and he was taken to the operating room for surgical debridement. The samples collected intraoperatively were sent for microbiological analysis, revealing infection by multi-sensitive Staphylococcus aureus. Given this result, the empiric flucloxacillin (500 mg IV) and clindamycin (1000 mg IV) were maintained for 3 weeks. On the seventh day of hospitalization, there was significant improvement in the subcutaneous and musculoskeletal tissues. After two weeks of hospitalization, there was no purulent content, and partial closure of the wounds was possible. After 3 weeks, he was switched to oral antibiotics (flucloxacillin 500 mg). A week later, a urinary infection by Pseudomonas aeruginosa was diagnosed, and ciprofloxacin 500 mg was administered for 7 days without complications. After 30 days of hospital admission, the patient was discharged home, having recovered. Results: The final diagnosis of concomitant septic arthritis and necrotizing fasciitis was made based on the imaging findings, surgical exploration, and microbiological test results. Conclusions: Early antibiotic administration and surgical debridement are key in the management of septic arthritis and necrotizing fasciitis. Furthermore, risk factor control (maintaining euglycemic blood glucose levels) must always be taken into account, given its crucial role in the patient's recovery.

Keywords: septic arthritis, necrotizing fasciitis, diabetes, Staphylococcus aureus

Procedia PDF Downloads 287
1443 Optimizing Hydrogen Production from Biomass Pyro-Gasification in a Multi-Staged Fluidized Bed Reactor

Authors: Chetna Mohabeer, Luis Reyes, Lokmane Abdelouahed, Bechara Taouk

Abstract:

In the transition to sustainability and the increasing use of renewable energy, hydrogen will play a key role as an energy carrier. Biomass has the potential to accelerate the realization of hydrogen as a major fuel of the future. Pyro-gasification converts organic matter mainly into synthesis gas, or "syngas", composed mostly of CO, H2, CH4, and CO2. A second, condensable fraction of the biomass pyro-gasification products is "tars". Under certain conditions, tars may decompose into hydrogen and other light hydrocarbons. These conditions include two types of cracking: homogeneous cracking, where tars decompose under the effect of temperature (> 1000 °C), and heterogeneous cracking, where catalysts such as olivine, dolomite, or biochar are used. The latter process favors cracking of tars at temperatures close to pyro-gasification temperatures (~ 850 °C). Pyro-gasification of biomass coupled with water-gas shift is the most widely practiced process route for biomass to hydrogen today. In this work, an innovative solution is proposed for this conversion route, in which all the pyro-gasification products, not only methane, undergo processes that aim to optimize hydrogen production. First, a heterogeneous cracking step was included in the reaction scheme, using biochar (the solid remaining from the pyro-gasification reaction) as catalyst and CO2 and H2O as gasifying agents. This step was followed by catalytic steam methane reforming (SMR), for which a Ni-based catalyst was tested under different reaction conditions to optimize the H2 yield. Finally, a water-gas shift (WGS) reaction step with a Fe-based catalyst was added to optimize the H2 yield from CO. The reactor used for cracking was a fluidized bed reactor; SMR and WGS were performed in a fixed bed reactor. The gaseous products were analyzed continuously using a µ-GC (Fusion PN 074-594-P1F). With biochar as bed material, more H2 was obtained with steam as gasifying agent (32 mol% vs. 15 mol% with CO2 at 900 °C). CO and CH4 productions were also higher with steam than with CO2. Steam as gasifying agent and biochar as bed material were hence deemed efficient parameters for the first step. Among all parameters tested, CH4 conversions approaching 100% were obtained from SMR using Ni/γ-Al2O3 as catalyst at 800 °C and a steam/methane ratio of 5, giving rise to about 45 mol% H2. Experiments on the WGS reaction are currently being conducted. At the end of this phase, the four reactions will be performed consecutively and the results analyzed. The final aim is the development of a global kinetic model of the whole system in a multi-stage fluidized bed reactor that can be transferred to ASPEN Plus™.
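
As a quick plausibility check of the reported compositions, the sketch below (with illustrative conversion values, not the authors' measured data) propagates the SMR and WGS stoichiometry, CH4 + H2O → CO + 3H2 and CO + H2O → CO2 + H2, through a simple mole balance at a steam/methane ratio of 5.

```python
# Minimal stoichiometric sketch of the SMR + WGS steps described above.
# Assumed inputs (illustrative only): steam/methane ratio S = 5,
# SMR conversion x, WGS extent y (mol of CO shifted per mol CH4 fed).

def syngas_composition(steam_to_methane=5.0, smr_conversion=1.0, wgs_extent=0.6):
    """Wet-basis mole fractions after SMR (CH4 + H2O -> CO + 3 H2)
    followed by WGS (CO + H2O -> CO2 + H2), per mol of CH4 fed."""
    x, y, s = smr_conversion, wgs_extent, steam_to_methane
    moles = {
        "CH4": 1.0 - x,
        "CO":  x - y,
        "CO2": y,
        "H2":  3.0 * x + y,
        "H2O": s - x - y,
    }
    total = sum(moles.values())
    return {k: v / total for k, v in moles.items()}

comp = syngas_composition()
print({k: round(v, 3) for k, v in comp.items()})
# With full CH4 conversion and partial WGS, H2 approaches the ~45 mol%
# reported for Ni/gamma-Al2O3 at 800 degC and steam/methane = 5.
```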

Keywords: multi-staged fluidized bed reactor, pyro-gasification, steam methane reforming, water-gas shift

Procedia PDF Downloads 123
1442 An Effort at Improving Reliability of Laboratory Data in Titrimetric Analysis for Zinc Sulphate Tablets Using Validated Spreadsheet Calculators

Authors: M. A. Okezue, K. L. Clase, S. R. Byrn

Abstract:

The requirement for maintaining data integrity in laboratory operations is critical for regulatory compliance, and automation of procedures reduces the incidence of human error. Quality control laboratories located in low-income economies may face barriers in attempts to automate their processes. Since data from quality control tests on pharmaceutical products are used in making regulatory decisions, it is important that laboratory reports are accurate and reliable. Zinc sulphate (ZnSO4) tablets are used in the treatment of diarrhea in the pediatric population and as an adjunct therapy in COVID-19 regimens. Unfortunately, the zinc content in these formulations is determined titrimetrically, a manual analytical procedure. The assay for ZnSO4 tablets involves time-consuming steps that contain mathematical formulae prone to calculation errors. To achieve consistency, save costs, and improve data integrity, validated spreadsheets were developed to simplify the two critical steps in the analysis of ZnSO4 tablets: standardization of the 0.1 M sodium edetate (EDTA) solution, and the complexometric titration assay procedure. The assay method in the United States Pharmacopoeia was used to create a process flow for ZnSO4 tablets. For each step in the process, formulae were entered into two spreadsheets to automate the calculations, and further checks were created within the automated system to ensure the validity of replicate analyses in the titrimetric procedures. Validations were conducted using five data sets of manually computed assay results, and the acceptance criteria set for the protocol were met. Significant p-values (p < 0.05, α = 0.05, at 95% confidence interval) were obtained from Student's t-test evaluation of the mean values for manually calculated and spreadsheet results at all levels of the analysis flow. Right-first-time analysis and principles of data integrity were enhanced by the use of the validated spreadsheet calculators in titrimetric evaluations of ZnSO4 tablets. Human errors in calculations were minimized when procedures were automated in quality control laboratories, and the assay procedure for the formulation was achieved in a time-efficient manner with a greater level of accuracy. This project is expected to promote cost savings for laboratory business models.
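
For illustration, a minimal sketch of the two calculations being automated, using the 1:1 Zn:EDTA complexometric stoichiometry. This is not the authors' validated spreadsheet; the constants, the heptahydrate assumption, and the replicate acceptance limit are assumptions for the example.

```python
# Sketch of the standardization and assay calculations (illustrative values).
ZN_ATOMIC_MASS = 65.38          # g/mol
ZNSO4_7H2O_MOLAR_MASS = 287.56  # g/mol, assuming the heptahydrate

def edta_molarity(zn_mass_g, titre_ml):
    """Standardization: molarity of EDTA from a weighed zinc standard (1:1)."""
    return (zn_mass_g / ZN_ATOMIC_MASS) / (titre_ml / 1000.0)

def zinc_content_mg(titre_ml, edta_molarity_m):
    """Assay: mg of zinc complexed at the end point (1:1 stoichiometry)."""
    return titre_ml / 1000.0 * edta_molarity_m * ZN_ATOMIC_MASS * 1000.0

def replicates_valid(values, max_rsd_pct=2.0):
    """Replicate check analogous to the spreadsheet's built-in validity rule."""
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / (len(values) - 1)) ** 0.5
    return 100.0 * sd / mean <= max_rsd_pct

m = edta_molarity(zn_mass_g=0.6538, titre_ml=100.0)   # ~0.1 M
print(round(m, 4), round(zinc_content_mg(10.0, m), 2))
print(replicates_valid([100.1, 100.3, 99.8]))
```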

Keywords: data integrity, spreadsheets, titrimetry, validation, zinc sulphate tablets

Procedia PDF Downloads 154
1441 Tuning the Surface Roughness of Patterned Nanocellulose Films: An Alternative to Plastic-Based Substrates for Circuit Printing in High-Performance Electronics

Authors: Kunal Bhardwaj, Christine Browne

Abstract:

With the increase in global awareness of the environmental impacts of plastic-based products, there has been a massive drive to reduce our use of these products. The use of plastic-based substrates in electronic circuits has recently become a matter of concern. Plastics provide a very smooth and cheap surface for printing high-performance electronics due to their non-permeability to ink and easy mouldability. In this research, we explore the use of nanocellulose (NC) films in electronics, as they offer the advantage of being 100% recyclable and eco-friendly. The main hindrance to the mass adoption of NC films as a substitute for plastic is their higher surface roughness, which leads to ink penetration and dispersion in the channels on the film. This research was conducted to tune the RMS roughness of NC films into the range where they can replace plastics in electronics (310-470 nm). We studied the dependence of the surface roughness of the NC film on the following tunable aspects: 1) the composition by weight of the NC suspension that is sprayed onto a silicon wafer, and 2) the width and depth of the channels on the silicon wafer used as a base. Silicon wafers with channel depths ranging from 6 to 18 µm and channel widths ranging from 5 to 500 µm were used as bases. The spray coating method for NC film production was used, and two suspensions, 1.5 wt% NC and a 50-50 NC-CNC (cellulose nanocrystal) mixture in distilled water, were sprayed through a Wagner sprayer system model 117 at an angle of 90 degrees. The silicon wafer was kept on a conveyor moving at a velocity of 1.3 ± 0.1 cm/s. Once the suspension was uniformly sprayed, the mould was left to dry in an oven at 50 °C overnight. Images of the films were taken with an optical profilometer, Olympus OLS 5000, converted into the '.lext' format, and analyzed using Gwyddion, a data and image analysis software. The lowest measured RMS roughness, 291 nm, was obtained with the 50-50 CNC-NC mixture sprayed on a silicon wafer with a channel width of 5 µm and a channel depth of 12 µm. Surface roughness values of 320 ± 17 nm were achieved at the lower channel widths (5 to 10 µm). This research opens up the possibility of using 100% recyclable NC films with an additive (50% CNC) in high-performance electronics. The possibility of using additives like carboxymethyl cellulose (CMC) is also being explored, based on the hypothesis that CMC would reduce friction amongst fibers, which in turn would lead to better conformations amongst the NC fibers. CMC addition could thus help tune the surface roughness of the NC film to an even greater extent in the future.
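
The RMS (Sq) roughness reported above is the root-mean-square deviation of the height map from its mean plane; the short sketch below applies this definition to a synthetic height map standing in for the exported profilometer data (the array values are illustrative, not measurements from this study).

```python
# Minimal RMS (Sq) roughness computation over a height map, in nm.
import numpy as np

def rms_roughness(height_map_nm):
    """Sq = sqrt of the mean squared deviation from the mean plane."""
    z = np.asarray(height_map_nm, dtype=float)
    return float(np.sqrt(np.mean((z - z.mean()) ** 2)))

rng = np.random.default_rng(0)
z = rng.normal(loc=0.0, scale=320.0, size=(512, 512))  # synthetic heights, nm
print(f"Sq = {rms_roughness(z):.0f} nm")  # ~320 nm, inside the target band
```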

Keywords: nanocellulose films, electronic circuits, nanocrystals, surface roughness

Procedia PDF Downloads 112
1440 Network Analysis to Reveal Microbial Community Dynamics in the Coral Reef Ocean

Authors: Keigo Ide, Toru Maruyama, Michihiro Ito, Hiroyuki Fujimura, Yoshikatu Nakano, Shoichiro Suda, Sachiyo Aburatani, Haruko Takeyama

Abstract:

Understanding environmental systems is an important task. In recent years, the conservation of coral environments has been a focus of biodiversity work, and damage to coral reefs under environmental impacts has been observed worldwide. However, the causal relationship between coral damage and environmental impacts has not been clearly understood. On the other hand, the structure and diversity of the marine bacterial community may be relatively robust up to a certain strength of environmental impact. To evaluate coral environment conditions, it is necessary to investigate the relationship between the marine bacterial composition in coral reefs and environmental factors. In this study, a Time Scale Network Analysis was developed and applied to marine environmental data to investigate the relationships among coral, bacterial community compositions, and environmental factors. Seawater samples were collected fifteen times from November 2014 to May 2016 at two locations, Ishikawabaru and South of Sesoko, on Sesoko Island, Okinawa. Physicochemical factors such as temperature, photosynthetically active radiation, dissolved oxygen, turbidity, pH, salinity, chlorophyll, dissolved organic matter, and depth were measured in the coral reef area. The metagenome and metatranscriptome in the coral reef seawater were analyzed as the biological factors: metagenome data were used to clarify the marine bacterial community composition, and the functional gene composition was estimated from the metatranscriptome. To infer the relationships between physicochemical and biological factors, cross-correlation analysis was applied to the time-scale data. Although cross-correlation coefficients capture time-precedence information, they also include indirect interactions between variables. To elucidate the direct regulations between factors, partial correlation coefficients were therefore combined with cross-correlation. This analysis was performed on all parameters: the bacterial composition, the functional gene composition, and the physicochemical factors. As a result, the time scale network analysis revealed the direct regulation of seawater temperature by photosynthetically active radiation. In addition, the concentration of dissolved oxygen regulated the value of chlorophyll. Such reasonable regulatory relationships between environmental factors reveal part of the mechanisms at work in the coral reef area.
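
A minimal sketch of the two statistics combined here, lagged cross-correlation for time precedence and partial correlation for removing indirect effects. The synthetic series stand in for the fifteen sampling dates, and the variable names are illustrative.

```python
# Lagged cross-correlation and partial correlation, the building blocks
# of the Time Scale Network Analysis described above.
import numpy as np

def lagged_crosscorr(x, y, lag):
    """Pearson correlation between x(t) and y(t + lag)."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return float(np.corrcoef(x, y)[0, 1])

def partial_corr(data):
    """Partial correlation matrix via the inverse covariance (precision)."""
    p = np.linalg.pinv(np.cov(data, rowvar=False))
    d = np.sqrt(np.outer(np.diag(p), np.diag(p)))
    pc = -p / d
    np.fill_diagonal(pc, 1.0)
    return pc

rng = np.random.default_rng(1)
par = rng.normal(size=15)                      # photosynthetically active radiation
temp = 0.8 * par + 0.3 * rng.normal(size=15)   # temperature tracks PAR
do = 0.5 * temp + 0.3 * rng.normal(size=15)    # dissolved oxygen, indirect link to PAR
data = np.column_stack([par, temp, do])
print(lagged_crosscorr(par, temp, lag=0))
print(partial_corr(data).round(2))  # PAR-DO entry shrinks once temp is controlled for
```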

Keywords: coral environment, marine microbiology, network analysis, omics data analysis

Procedia PDF Downloads 239
1439 An Adjoint-Based Method to Compute Derivatives with Respect to Bed Boundary Positions in Resistivity Measurements

Authors: Mostafa Shahriari, Theophile Chaumont-Frelet, David Pardo

Abstract:

Resistivity measurements are used to characterize the Earth’s subsurface. They are categorized into two different groups: (a) those acquired on the Earth’s surface, for instance, controlled source electromagnetic (CSEM) and magnetotellurics (MT), and (b) those recorded with borehole logging instruments such as Logging-While-Drilling (LWD) devices. LWD instruments are mostly used for geo-steering purposes, i.e., to adjust the dip and azimuthal angles of a well trajectory to drill along a particular geological target. Modern LWD tools measure all nine components of the magnetic field corresponding to three orthogonal transmitter and receiver orientations. In order to map the Earth’s subsurface and perform geo-steering, we invert measurements using a gradient-based method that utilizes the derivatives of the recorded measurements with respect to the inversion variables. For resistivity measurements, these inversion variables are usually the constant resistivity value of each layer and the bed boundary positions. It is well known how to compute derivatives with respect to the constant resistivity value of each layer using semi-analytic or numerical methods. However, similar formulas for computing the derivatives with respect to bed boundary positions are unavailable. The main contribution of this work is to provide an adjoint-based formulation for computing derivatives with respect to the bed boundary positions. The key idea to obtain the aforementioned adjoint state formulations for the derivatives is to separate the tangential and normal components of the field and treat them differently. This formulation allows us to compute the derivatives faster and more accurately than with traditional finite difference approximations. In the presentation, we shall first derive a formula for computing the derivatives with respect to the bed boundary positions for the potential equation. Then, we shall extend our formulation to 3D Maxwell’s equations. Finally, by considering a 1D domain and reducing the dimensionality of the problem, which is a common practice in the inversion of resistivity measurements, we shall derive a formulation to compute the derivatives of the measurements with respect to the bed boundary positions using a 1.5D variational formulation. We shall then illustrate the accuracy and convergence properties of our formulations by comparing numerical results with the analytical derivatives for the potential equation. For the 1.5D Maxwell’s system, we shall compare our numerical results based on the proposed adjoint-based formulation with those obtained with a traditional finite difference approach. Numerical results shall show that our proposed adjoint-based technique produces solutions of enhanced accuracy at negligible cost, as opposed to the finite difference approach, which requires the solution of one additional problem per derivative.
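
To make the cost argument concrete, the toy sketch below (not the paper's adjoint formulation; the two-layer "measurement" functional and its exponential kernel are invented for illustration) differentiates a 1D measurement with respect to the bed boundary depth both analytically and via the central finite difference that the adjoint approach replaces, where each finite-difference derivative requires extra forward solves.

```python
# Toy 1D measurement m(zb) depending on the bed boundary depth zb:
# m(zb) = integral over [0, L] of sigma(z) * exp(-z), two-layer sigma(z).
import math

SIGMA1, SIGMA2, L = 2.0, 0.5, 10.0  # layer conductivities and domain depth

def measurement(zb):
    """Closed-form forward solve for the two-layer toy model."""
    return SIGMA1 * (1 - math.exp(-zb)) + SIGMA2 * (math.exp(-zb) - math.exp(-L))

def d_measurement_analytic(zb):
    """Exact derivative w.r.t. the boundary depth: (sigma1 - sigma2) e^{-zb}."""
    return (SIGMA1 - SIGMA2) * math.exp(-zb)

def d_measurement_fd(zb, h=1e-5):
    # Two extra forward solves per derivative -- the cost adjoints avoid.
    return (measurement(zb + h) - measurement(zb - h)) / (2 * h)

zb = 3.0
print(d_measurement_analytic(zb), d_measurement_fd(zb))
```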

Keywords: inverse problem, bed boundary positions, electromagnetism, potential equation

Procedia PDF Downloads 163
1438 Biofilm Text Classifiers Developed Using Natural Language Processing and Unsupervised Learning Approach

Authors: Kanika Gupta, Ashok Kumar

Abstract:

Biofilms are dense, highly hydrated cell clusters that are irreversibly attached to a substratum, to an interface, or to each other, and are embedded in a self-produced gelatinous matrix composed of extracellular polymeric substances. Research in the biofilm field has become very significant, as biofilms show high mechanical resilience and resistance to antibiotic treatment and constitute a significant problem both in healthcare and in other industries involving microorganisms. The massive amount of information, both explicit and hidden, in the biofilm literature is growing exponentially, so researchers and practitioners cannot extract and relate information from the different written resources without automated support. The current work therefore proposes and discusses the use of text mining techniques for the extraction of information from a biofilm literature corpus containing 34,306 documents. It is very difficult and expensive to obtain annotated material for biomedical literature, as the literature is unstructured, i.e., free text. We therefore adopted an unsupervised approach, where no annotated training material is necessary, and developed a system that classifies the text on the basis of growth and development, drug effects, radiation effects, classification, and physiology of biofilms. For this, a two-step structure was used: the first step extracts keywords from the biofilm literature using a metathesaurus and standard natural language processing tools such as Rapid Miner_v5.3, and the second step discovers relations between the genes extracted from the whole set of biofilm literature using pubmed.mineR_v1.0.11. We applied unsupervised learning, the machine learning task of inferring a function to describe hidden structure from 'unlabeled' data, to the above-extracted datasets to develop classifiers, using the WinPython-64 bit_v3.5.4.0Qt5 and R studio_v0.99.467 packages, which automatically classify the text using the mentioned sets. The developed classifiers were tested on a large data set of biofilm literature, which showed that the proposed unsupervised approach is promising as well as suited to semi-automatic labeling of the extracted relations. The entire information was stored in a relational database hosted locally on the server. The generated biofilm vocabulary and gene relations will be significant for researchers dealing with biofilm research, making their searches easy and efficient, as the keywords and genes can be directly mapped to the documents used for database development.
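
A minimal unsupervised sketch of this kind of pipeline in Python (the study itself used Rapid Miner, pubmed.mineR, and custom scripts; the toy "abstracts", cluster count, and labels below are illustrative): TF-IDF features followed by k-means clustering, with the top term of each cluster serving as an automatic label.

```python
# Unsupervised text classification sketch: TF-IDF + k-means.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = [
    "biofilm growth and development on catheter surfaces",
    "antibiotic drug effects on mature biofilms",
    "radiation effects on biofilm matrix integrity",
    "physiology of extracellular polymeric substances in biofilms",
]  # stand-ins for the 34,306-document corpus

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()
for c in range(4):  # top TF-IDF term per cluster as an automatic label
    center = km.cluster_centers_[c]
    print(c, terms[center.argmax()])
```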

Keywords: biofilms literature, classifiers development, text mining, unsupervised learning approach, unstructured data, relational database

Procedia PDF Downloads 151
1437 Antibiotic Susceptibility Pattern of the Pathogens Isolated from Hospital Acquired Acute Bacterial Meningitis in a Tertiary Health Care Centre in North India

Authors: M. S. Raza, A. Kapil, Sonu Tyagi, H. Gautam, S. Mohapatra, R. Chaudhry, S. Sood, V. Goyal, R. Lodha, V. Sreenivas, B. K. Das

Abstract:

Background: Acute bacterial meningitis remains a major cause of mortality and morbidity, and more than half of the survivors develop significant lifelong neurological abnormalities. Diagnosis of hospital-acquired acute bacterial meningitis (HAABM) is challenging, as it occurs either in post-operative patients or in patients who acquire the organisms from the hospital environment. In both situations, the pathogens have been exposed to high doses of antibiotics, and the chances of encountering multidrug-resistant organisms are very high. We performed this study to identify the etiological agents of HAABM and their antibiotic susceptibility patterns. Methodology: A prospective study was conducted at the Department of Microbiology, All India Institute of Medical Sciences, New Delhi. From March 2015 to April 2018, a total of 400 cerebrospinal fluid (CSF) samples were collected aseptically. Samples were processed for cell count, Gram staining, and culture. Culture plates were incubated at 37 °C for 18-24 hours. Organisms grown on blood and MacConkey agar were identified by MALDI-TOF Vitek MS (BioMerieux, France), and antibiotic susceptibility tests were performed by the Kirby-Bauer disc diffusion method as per the CLSI 2015 guidelines. Results: Of the 400 CSF samples processed, 43 (10.75%) were culture positive for different bacteria. Among the 43 isolates, the most prevalent Gram-positive organisms were S. aureus, 4 (9.30%), followed by E. faecium, 3 (6.97%), and CONS, 2 (4.65%). Similarly, E. coli, 13 (30.23%), was the commonest Gram-negative isolate, followed by A. baumannii, 12 (27.90%), K. pneumoniae, 5 (11.62%), and P. aeruginosa, 4 (9.30%). The Gram-negative isolates were resistant to most of the antibiotics tested; colistin was most effective, followed by meropenem and imipenem, for all Gram-negative HAABM isolates. Similarly, S. aureus and CONS were susceptible to most of the antibiotics tested, whereas E. faecium was susceptible only to vancomycin and teicoplanin (100%). Conclusion: Hospital-acquired acute bacterial meningitis is becoming an emerging challenge, as most isolates show resistance to commonly used antibiotics, and Gram-negative organisms are emerging as the major players in HAABM. Great care needs to be taken, especially in tertiary care hospitals. Antibiotic stewardship should be followed, and antibiotic susceptibility testing (AST) should be performed regularly to update antibiotic patterns and prevent the emergence of resistance. Updated AST information will be helpful for the better management of meningitis patients.

Keywords: CSF, MALDI-TOF, hospital acquired acute bacterial meningitis, AST

Procedia PDF Downloads 144
1436 Microglia Activation in Animal Model of Schizophrenia

Authors: Esshili Awatef, Manitz Marie-Pierre, Eßlinger Manuela, Gerhardt Alexandra, Plümper Jennifer, Wachholz Simone, Friebe Astrid, Juckel Georg

Abstract:

Maternal immune activation (MIA) resulting from maternal viral infection during pregnancy is a known risk factor for schizophrenia. The neural mechanisms by which maternal infections increase the risk for schizophrenia remain unknown, although the prevailing hypothesis argues that activation of the maternal immune system induces changes in the maternal-fetal environment that might interact with fetal brain development and may lead to an activation of fetal microglia, inducing long-lasting functional changes in these cells. Based on post-mortem analyses showing an increased number of activated microglial cells in patients with schizophrenia, it can be hypothesized that these cells contribute to disease pathogenesis and may be actively involved in the gray matter loss observed in such patients. In the present study, we hypothesize that prenatal treatment with the inflammatory agent Poly(I:C) during embryogenesis contributes to microglial activation in the offspring, which may therefore represent a contributing factor to the pathogenesis of schizophrenia and underline the need for new pharmacological treatment options. Pregnant rats were treated with a single intraperitoneal injection of Poly(I:C) or saline on gestation day 17. Brains of control and Poly(I:C) offspring were removed and cut into 20-μm-thick coronal sections using a cryostat. Brain slices were fixed and immunostained with an Iba1 antibody, and Iba1 immunoreactivity was subsequently detected using a goat anti-rabbit secondary antibody. The sections were viewed and photographed under a microscope. The immunohistochemical analysis revealed increases in microglia cell number in the prefrontal cortex of the offspring of Poly(I:C)-treated rats as compared to the controls injected with NaCl. However, no significant differences in microglia activation were observed in the cerebellum among the groups. Prenatal immune challenge with Poly(I:C) was thus able to induce long-lasting changes in the offspring brains, leading to a higher activation of microglia cells in the prefrontal cortex, a brain region critical for many higher brain functions, including working memory and cognitive flexibility, which might be implicated in possible changes in cortical neuropil architecture in schizophrenia. Further studies will be needed to clarify the association between microglial cell activation and schizophrenia-related behavioral alterations.

Keywords: microglia, neuroinflammation, Poly(I:C), schizophrenia

Procedia PDF Downloads 403
1435 Analysis of Splicing Methods for High Speed Automated Fibre Placement Applications

Authors: Phillip Kearney, Constantina Lekakou, Stephen Belcher, Alessandro Sordon

Abstract:

The focus in the automotive industry is to reduce human operator and machine interaction, so manufacturing becomes more automated and safer. The aim is to lower part cost and construction time as well as defects in the parts, which sometimes occur due to the physical limitations of human operators. A move to automate the layup of reinforcement material in composites manufacturing has resulted in the use of tapes that are placed in position by a robotic deposition head, a process also described as Automated Fibre Placement (AFP). The AFP process is limited by the finite amount of material that can be loaded into the machine at any one time. Joining two batches of tape material together involves a splice to secure the end of the finishing tape to the starting edge of the new tape. The splicing method of choice for the majority of prepreg applications is a hand stitch method, which, as the name suggests, requires human input. This investigation explores three methods for automated splicing, namely, adhesive, binding, and stitching. The adhesive technique uses an additional adhesive placed on the tape ends to be joined. Binding uses the binding agent already impregnated into the tape, activated through the application of heat. The stitching method is used as a baseline against which to compare the new splicing methods. As the methods will be used within a High Speed Automated Fibre Placement (HSAFP) process, the splices have to meet certain specifications: (a) the splice must be able to endure a load of 50 N in tension applied at a rate of 1 mm/s; (b) the splice must be created in less than 6 seconds, dictated by the capacity of the tape accumulator within the system. The samples for experimentation were manufactured with controlled overlaps, alignment, and splicing parameters, and were then tested in tension using a tensile testing machine. The initial analysis explored the use of the impregnated binding agent present on the tape, as in the binding splicing technique, analysing the effect of temperature and overlap on the strength of the splice. It was found that the optimum splicing temperature was at the higher end of the activation range of the binding agent, 100 °C, and that the optimum overlap was 25 mm, with no improvement in bond strength from 25 mm to 30 mm overlap. The final analysis compared the different splicing methods to the stitched-bond baseline. The addition of an adhesive was found to be the best splicing method, achieving a maximum load of over 500 N, compared to the 26 N load achieved by a stitching splice and 94 N by the binding method.

Keywords: analysis, automated fibre placement, high speed, splicing

Procedia PDF Downloads 135
1434 ‘Do We Really Belong Here?’ Transnationalism and the Temporality of Naturalized Citizenship

Authors: Edward Shizha

Abstract:

Citizenship is not only political; it is also a socio-cultural status that naturalized immigrants desire. However, the outcomes of this desire are determined by forces outside the individual's control, based on legislation and laws designed at the macro- and exosystemic levels by politicians and policy makers. These laws are applied to determine the status (permanency or temporariness) of citizenship for immigrants and refugees, but the same laws do not apply to non-immigrant citizens who attain citizenship by birth. While citizenship has theoretically been considered an irrevocable legal status, and the highest and most secure legal status one can hold in a state, it is not inviolate for immigrants. While Article 8 of the United Nations Convention on the Reduction of Statelessness provides grounds for revocation of citizenship obtained by immigrants and refugees in host countries, nation-states have their own laws, tied to the convention, that provide grounds for revocation. Ever since the 9/11 attacks in the USA, there has been a rise in conditional citizenship and in the state's withdrawal of citizenship through revocation laws that denaturalize citizens, who end up not merely losing their citizenship but also the right to reside in the country of immigration. Because immigrants can be perceived as a security threat, the securitization of citizenship and legislative changes have been adopted specifically to allow greater discretionary power in stripping people of their citizenship. The paper, ‘Do We Really Belong Here?’ Transnationalism and the Temporality of Naturalized Citizenship, examines literature on the temporality of naturalized citizenship and questions whether citizenship, for newcomers (immigrants and refugees), is a protected human right or a privilege. The paper argues that citizenship in a host country is a much sought-after status for newcomers. The question is whether their citizenship, if granted, has a permanent or temporary status and whether it is treated in the same way as that of non-immigrant citizens. The paper further argues that, despite citizenship generally having been considered an irrevocable status in most Western countries, in practice, if not in law, citizenship for immigrants and refugees comes with strings attached, because of policies and laws that control naturalized citizenship. These laws can be used to denationalize naturalized citizens through revocations aimed at those stigmatized as ‘undesirables’, who are threatened with deportation. Whereas non-immigrant citizens (those who attain citizenship by birth) have an absolute right to their citizenship, this is seldom the case for immigrants. The paper takes a multidisciplinary approach, using the macrosystem and exosystem of Urie Bronfenbrenner's ecological systems theory, to examine and review literature on the temporality of naturalized citizenship. It challenges the human rights violation of citizenship revocation and argues for equality of treatment for all citizens regardless of how they acquired their citizenship. The fragility of naturalized citizenship undermines the basic rights and securities that citizenship status should provide to the person as an inclusive practice in a diverse society.

Keywords: citizenship, citizenship revocation, dual citizenship, human rights, naturalization, naturalized citizenship

Procedia PDF Downloads 52
1433 Changes in Cognition of Elderly People: A Longitudinal Study in Kanchanaburi Province, Thailand

Authors: Natchaphon Auampradit, Patama Vapattanawong, Sureeporn Punpuing, Malee Sunpuwan, Tawanchai Jirapramukpitak

Abstract:

Longitudinal studies of cognitive impairment in the elderly are necessary for health promotion and development. The purposes of this study were (1) to examine changes in the cognition of the elderly over time and (2) to examine the impacts of changes in social determinants of health (SDH) on changes in cognition, using secondary data derived from the Kanchanaburi Demographic Surveillance System (KDSS) of the Institute for Population and Social Research (IPSR), which contains longitudinal data on individuals, households, and villages. The two selected projects were the Health and Social Support for Elderly in KDSS in 2007 and the Population, Economic, Social, Cultural, and Long-term Care Surveillance for Thai Elderly People's Health Promotion in 2011. The sample comprised 586 elderly people who participated in both projects. The SDH included living arrangement, social relationships with children, relatives, and friends, household asset-based wealth index, household monthly income, loans for living, loans for investment, and working status. Cognitive impairment was measured by category fluency and delayed recall. The study employed a Generalized Estimating Equation (GEE) model to investigate changes in cognition, taking SDH and other variables such as age, gender, marital status, education, and depression into the model; the unstructured correlation structure was selected for the analysis. The results revealed that 24 percent of the elderly had cognitive impairment at baseline. About 13 percent still had cognitive impairment from 2007 through 2011, while about 21 percent and 11 percent showed cognitive decline and cognitive improvement, respectively. The cross-sectional analysis showed that household asset-based wealth index, social relationship with friends, working status, age, marital status, education, and depression were significantly associated with cognitive impairment. The GEE model revealed longitudinal effects of household asset-based wealth index and working status on cognition from 2007 through 2011; there was no longitudinal effect of social conditions on cognition. Elderly people living in households with a richer asset-based wealth index, those still employed, and those who were younger were less likely to have cognitive impairment. The results strongly suggest that a poorer household asset-based wealth index and being unemployed serve as risk factors for cognitive impairment over time, and increasing age remains the major risk factor for cognitive impairment.
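
A sketch of this GEE setup in Python using statsmodels with an unstructured working correlation; the data frame, column names, and simulated values below are illustrative stand-ins, not the KDSS variables.

```python
# GEE with a binary outcome and unstructured working correlation,
# two waves (2007 and 2011) per subject. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 586
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n), 2),        # two waves per person
    "impaired": rng.integers(0, 2, size=2 * n),   # cognitive impairment (0/1)
    "age": np.tile(rng.integers(60, 90, size=n), 2) + np.repeat([0, 4], n),
    "wealth": rng.normal(size=2 * n),             # asset-based wealth index
    "working": rng.integers(0, 2, size=2 * n),    # employment status
})

model = smf.gee("impaired ~ age + wealth + working",
                groups="subject", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Unstructured())
print(model.fit().summary())
```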

Keywords: changes in cognition, cognitive impairment, elderly, KDSS, longitudinal study

Procedia PDF Downloads 124
1432 Ensuring Sustainable Urban Mobility in Indian Cities: Need for Creating People Friendly Roadside Public Spaces

Authors: Pushplata Garg

Abstract:

Mobility is an integral part of living, and the sustainability of urban mobility is essential for addressing global warming and climate change. However, very little is understood, from the public perspective, about the obstacles and likely challenges to the success of plans for sustainable urban mobility in Indian cities. Whereas some of the problems and issues are common to all cities, others vary considerably with the financial status, function, and size of cities and the culture of a place. Problems and issues common to all cities relate to the availability, efficiency, and safety of public transport, last mile connectivity, universal accessibility, and the essential planning and design requirements of pedestrians and cyclists. However, certain aspects, such as the type of public transportation, the priority given to cycling and walking, and the type of roadside activities, are influenced by the size of the town, the average educational and income level of the public, the financial status of the local authorities, and the culture of the place. The extent of public awareness, civic sense, maintenance of public spaces, and law enforcement vary significantly from large metropolitan cities to small and medium towns in countries like India. Besides, design requirements for shading, the location of public open spaces and sitting areas, street furniture, and landscaping also vary with the climate of the place. Last mile connectivity plays a major role in the success and effectiveness of a city's public transport system. In addition to the provision of pedestrian footpaths connecting important destinations, sitting spaces, and the necessary amenities and facilities along footpaths, pedestrian movement to public transit stations is encouraged by the presence of quality roadside public spaces. It is not only the visual attractiveness of the streetscape, landscape, or public open spaces along pedestrian movement channels, but also the activities along them, that make a street vibrant and attractive. These, along with adequate spaces to rest and relax, encourage people to walk, as is observed in cities with successful public transportation systems. The paper discusses the problems and issues of pedestrians regarding last mile connectivity in the context of Delhi, Chandigarh, Gurgaon, and Roorkee, four Indian cities representing varying urban contexts: metropolitan, large, and small cities.

Keywords: pedestrianisation, roadside public spaces, last mile connectivity, sustainable urban mobility

Procedia PDF Downloads 233
1431 Social Business Evaluation in Brazil: Analysis of Entrepreneurship and Investor Practices

Authors: Erica Siqueira, Adriana Bin, Rachel Stefanuto

Abstract:

The paper aims to identify and discuss the impact and results of ex-ante, mid-term, and ex-post evaluation initiatives in Brazilian Social Enterprises from the point of view of entrepreneurs and investors, highlighting the processes involved in these activities and their aftereffects. The study used a descriptive, primarily qualitative methodology based on a multiple-case study: semi-structured interviews were conducted with ten entrepreneurs in the (i) social finance, (ii) education, (iii) health, (iv) citizenship, and (v) green tech fields, as well as with three representatives of impact investors in the (i) venture capital, (ii) loan, and (iii) equity interest areas. Convenience (non-probabilistic) sampling was adopted to select both businesses and investors, who contributed voluntarily to the research. Evaluation is still incipient in most of the studied cases. Some enterprises stand out by adopting well-known methodologies like the Global Impact Investing Rating System (GIIRS), but they still have a lot to improve in several respects. Most of these enterprises rely on non-experimental research conducted by their own employees, which some authors in the field would not regard as the 'gold standard'. Nevertheless, from the entrepreneurs' point of view, most of them include such routines to some extent in their day-to-day activities, despite the difficulties of the business in general. The investors, in turn, do not have overall directions for establishing evaluation initiatives in the enterprises they are funding; there is a mechanism of trust, and this is usually enough to prove the impact to all stakeholders. The work concludes that there is a large gap between the best practices stated in the literature and what the enterprises really do. Evaluation initiatives must be included, to some extent, in all enterprises in order to confirm the social impact they realize; the development and adoption of more flexible evaluation mechanisms that consider the complexity of these businesses' routines is recommended. The findings also suggest important implications for the field of Social Enterprises, whose practices are far from what the theory preaches. They highlight the risk to the legitimacy of enterprises that identify themselves with 'social impact', sometimes without proper proof based on causality data; this makes the field of social entrepreneurship fragile and susceptible to questioning, weakening the ecosystem as a whole. The top priorities of these enterprises must therefore be handled together with results and impact measurement activities. Further investigations are recommended into the trade-offs between impact and profit, as well as into gender, entrepreneurs' motivations for calling themselves Social Enterprises, and the possible unintended consequences of these businesses.

Keywords: evaluation practices, impact, results, social enterprise, social entrepreneurship ecosystem

Procedia PDF Downloads 103
1430 Wildland Fire in the Terai Arc Landscape of the Lesser Himalayas Threatening the Tiger Habitat

Authors: Amit Kumar Verma

Abstract:

The present study deals with a fire prediction model for the Terai Arc Landscape (TAL), one of the most dramatic ecosystems in Asia, where large, wide-ranging species such as tigers, rhinos, and elephants thrive while bringing economic benefits to the local people. Forest fires cause huge economic and ecological losses and release considerable quantities of carbon into the air, making them an important factor inflating the global burden of carbon emissions. Forest fire is also an important factor in the behavior and ecology of the tiger in the wild. Post-fire changes in micro- and macro-habitat directly affect tiger habitat. Vulnerability to fire is reflected in changes in the microhabitat (humus, soil profile, litter, vegetation, grassland ecosystem). Organisms like spiders, annelids, and arthropods, and other favorable microorganisms, are directly affected by forest fire, and indirectly all these organisms contribute to the development of tiger (Panthera tigris) habitat. On the other hand, fire brings depletion of prey species and pushes tigers out of the wild into human-dominated areas, which may lead to conflict dangerous for both tigers and human beings. Early forest fire prediction through mapping of risk zones can help minimize fire frequency and manage forest fires, thereby minimizing losses. Satellite data play a vital role in identifying and mapping forest fires and recording the frequency with which different vegetation types are affected. Thematic hazard maps were generated using the inverse distance weighting (IDW) technique, and a prediction model for fire occurrence was developed for TAL. Fire occurrence records were collected from the state forest department from 2000 to 2014. Discriminant function models were used to develop the prediction model for forest fires in TAL: random points for the non-occurrence of fire were generated and, based on the attributes of the points of occurrence and non-occurrence, the model predicts fire occurrence. The map of predicted probabilities classified the study area into five classes: very high (12.94%), high (23.63%), moderate (25.87%), low (27.46%), and no fire (10.1%), based on the intensity of the hazard. The model is able to classify 78.73 percent of the fire points correctly and hence can be used for the purpose with confidence; overall, the model classifies almost 69% of all points correctly. This study exemplifies the usefulness of forest fire prediction models and offers a more effective way to manage forest fires. Overall, the study presents a model for the conservation of the tiger's natural habitat and of the forest, which is beneficial to both wildlife and human beings in the future.
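
The discriminant-function step can be sketched as below (synthetic fire and non-fire points with invented predictors; the real model used attributes of the 2000-2014 occurrence records), including the binning of predicted probabilities into the five hazard classes.

```python
# Discriminant-function fire-occurrence model, sketched with synthetic data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Columns: slope, distance to settlement, NDVI (stand-in predictors)
fire = rng.normal([10, 1.0, 0.4], 0.2, size=(200, 3))
no_fire = rng.normal([6, 3.0, 0.6], 0.2, size=(200, 3))
X = np.vstack([fire, no_fire])
y = np.array([1] * 200 + [0] * 200)

lda = LinearDiscriminantAnalysis().fit(X, y)
p = lda.predict_proba(X)[:, 1]  # predicted probability of fire occurrence

# Bin predicted probabilities into the five hazard classes
bins = [0.0, 0.2, 0.4, 0.6, 0.8, 1.01]
labels = ["no fire", "low", "moderate", "high", "very high"]
print(labels[np.digitize(p[0], bins) - 1], f"accuracy={lda.score(X, y):.2f}")
```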

Keywords: fire prediction model, forest fire hazard, GIS, landsat, MODIS, TAL

Procedia PDF Downloads 339
1429 A Mixed Method Investigation of the Impact of Practicum Experience on Mathematics Female Pre-Service Teachers’ Sense of Preparedness

Authors: Fatimah Alsaleh, Glenda Anthony

Abstract:

The practicum experience is a critical component of any initial teacher education (ITE) course. As well as providing a near-authentic setting for pre-service teachers (PSTs) to practice in, it also plays a key role in shaping their perceptions and sense of preparedness. Nevertheless, merely including a practicum period as a compulsory part of ITE may not in itself be enough to induce feelings of preparedness and efficacy; the quality of the classroom experience must also be considered. Drawing on findings of a larger study of secondary and intermediate level mathematics PSTs' sense of preparedness to teach, this paper examines the influence of the practicum experience in particular. The study sample comprised female mathematics PSTs who had almost completed their teaching methods course in their fourth year of ITE across 16 teacher education programs in Saudi Arabia. The impact of the practicum experience on PSTs' sense of preparedness was investigated via a mixed-methods approach combining a survey (N = 105) and in-depth interviews with survey volunteers (N = 16). Statistical analysis in SPSS was used to explore the quantitative data, and thematic analysis was applied to the qualitative interview data. The results revealed that the PSTs perceived the practicum experience to have played a dominant role in shaping their feelings of preparedness and efficacy. However, despite the generally positive influence of the practicum, the PSTs also reported numerous challenges that lessened their feelings of preparedness. These challenges were often related to the classroom environment and the school culture. For example, about half of the PSTs indicated that the practicum schools did not have the resources available or the support necessary to help them learn the work of teaching. In particular, the PSTs expressed concerns about translating the theoretical knowledge learned at the university into practice in authentic classrooms. These challenges left PSTs feeling less prepared and suggest that more support from both the university and the school is needed to help PSTs develop a stronger sense of preparedness. The area in which PSTs felt least prepared was classroom and behavior management, although the results also indicated that PSTs felt only a moderate level of general teaching efficacy and were less confident about how to support students as learners. Again, feelings of lower efficacy were related to the dissonance between the theory presented at university and real-world classroom practice. In order to close this gap between theory and practice, PSTs expressed the wish to have more time in the practicum and more accountability for support from school-based mentors. In highlighting the challenges of the practicum in shaping PSTs' sense of preparedness and efficacy, the study argues that better communication between the ITE providers and the practicum schools is necessary in order to maximize the benefit of the practicum experience.

Keywords: impact, mathematics, practicum experience, pre-service teachers, sense of preparedness

Procedia PDF Downloads 106
1428 Intelligent Control of Agricultural Farms, Gardens, Greenhouses, Livestock

Authors: Vahid Bairami Rad

Abstract:

Making agricultural fields intelligent allows the temperature, humidity, and other variables affecting the growth of agricultural products to be monitored and controlled online, from a mobile phone or computer. Smartening agricultural fields and gardens is one of the best ways to optimize agricultural equipment and has a direct effect on the growth of plants and agricultural products and on farms. Smart farms, built on the Internet of Things and artificial intelligence, are the topic discussed here. Agriculture is becoming smarter every day: from large industrial operations to individuals growing organic produce locally, technology is at the forefront of reducing costs, improving results, and ensuring optimal delivery to market. A key element of smart agriculture is the use of useful data, and modern farmers have more tools to collect intelligent data than in previous years. Data on soil chemistry allow people to make informed decisions about fertilizing farmland. Moisture sensors and accurate irrigation controllers allow irrigation processes to be optimized while reducing the cost of water consumption. Drones can apply pesticides precisely at the desired point. Automated harvesting machines navigate crop fields based on position and capacity sensors. The list goes on: almost any process related to agriculture can use sensors that collect data to optimize existing processes and support informed decisions. The Internet of Things (IoT) is at the center of this great transformation. IoT hardware has grown and developed rapidly to provide low-cost sensors for these needs; the sensors are embedded in battery-powered IoT devices that can operate for years and have access to low-power, cost-effective mobile networks. IoT device management platforms have also evolved rapidly and can now be used to securely manage existing devices at scale. IoT cloud services likewise provide a set of application enablement services that can easily be used by developers to build application business logic. These developments have created powerful new applications in the field of the Internet of Things, and these applications can be used in various industries, including agriculture and the building of smart farms. But the question is, what makes today's farms truly smart farms? Let us put this question another way: when will the technologies associated with smart farms reach the point where the intelligence they provide can exceed that of experienced, professional farmers?
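
As a hedged sketch of the sensing-to-cloud path described above, the snippet below shows a field node publishing soil readings over MQTT; the broker address, topic, and sensor read-out are placeholders, not a specific vendor's API.

```python
# Field node publishing periodic telemetry over MQTT (illustrative only).
import json
import random
import time

import paho.mqtt.client as mqtt  # pip install paho-mqtt

BROKER = "broker.example.com"   # hypothetical farm gateway / cloud endpoint
TOPIC = "farm/field1/telemetry"

def read_sensors():
    """Stand-in for real soil-moisture / temperature / humidity drivers."""
    return {
        "soil_moisture_pct": round(random.uniform(20, 60), 1),
        "temperature_c": round(random.uniform(10, 35), 1),
        "humidity_pct": round(random.uniform(30, 90), 1),
        "ts": int(time.time()),
    }

# paho-mqtt 1.x style; for paho-mqtt >= 2.0, pass
# mqtt.CallbackAPIVersion.VERSION2 as the first Client() argument.
client = mqtt.Client()
client.connect(BROKER, 1883)
client.loop_start()
for _ in range(3):  # periodic low-power reporting cycle
    client.publish(TOPIC, json.dumps(read_sensors()), qos=1)
    time.sleep(5)
client.loop_stop()
client.disconnect()
```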

Keywords: food security, IoT automation, wireless communication, hybrid lifestyle, Arduino Uno

Procedia PDF Downloads 35
1427 Effect of Laser Ablation OTR Films and High Concentration Carbon Dioxide for Maintaining the Freshness of Strawberry ‘Maehyang’ for Export in Modified Atmosphere Condition

Authors: Hyuk Sung Yoon, In-Lee Choi, Min Jae Jeong, Jun Pill Baek, Ho-Min Kang

Abstract:

This study was conducted to improve the storability of strawberry 'Maehyang' for export by using suitable laser-ablated oxygen transmission rate (OTR) films and assessing the effectiveness of high carbon dioxide. Strawberries were grown in a hydroponic system in Gyeongsangnam-do province. The strawberries were packed in different laser-ablated OTR films (Daeryung Co., Ltd.) of 1,300, 20,000, 40,000, 80,000, and 100,000 cc·m⁻²·day⁻¹·atm⁻¹. A CO2 injection (30%) treatment used the 20,000 cc·m⁻²·day⁻¹·atm⁻¹ OTR film, and a perforated film served as control. Temperature conditions simulated shipping and distribution from Korea to Singapore: storage at 3 °C (13 days), 10 °C (one hour), and 8 °C (7 days), for 20 days in total. The fresh weight loss rate stayed under 1%, the maximum permissible weight loss, in all treated OTR films except the perforated control during storage. The carbon dioxide concentration within the packages remained below the maximum tolerated CO2 concentration (15%) in all treated OTR films over the storage period; in the high-OTR treatments, from 20,000 cc to 100,000 cc, it stayed below 3%. The 1,300 cc film maintained a suitable carbon dioxide range (over 5% and under 15%) from 5 days after storage until the end of the experiment; the CO2 injection treatment dropped quickly to 15% after 1 day of storage but then stayed around 15% throughout storage. Oxygen concentration was maintained between 10 and 15% in the 1,300 cc and CO2 injection treatments, while the other treatments stayed at 19 to 21%. Ethylene concentration was much higher in the CO2 injection treatment than in the OTR treatments; among the OTR treatments, the 1,300 cc film showed the highest ethylene concentration and the 20,000 cc film the lowest. Firmness was maintained highest in the 1,300 cc treatment, with no significant differences among the other OTR treatments. Visual quality was best in the 20,000 cc treatment, which remained marketable until 20 days after storage. The 20,000 cc and perforated films performed better than the other treatments for off-odor, whereas the 1,300 cc and CO2 injection treatments produced strong off-odor that persisted even after 10 minutes. Based on the Hunter 'L' and 'a' values from a chroma meter, color development was delayed in the 1,300 cc and CO2 injection treatments due to the high carbon dioxide concentration, with no significant differences among the other treatments. The results indicate that freshness was best maintained with the 20,000 cc·m⁻²·day⁻¹·atm⁻¹ film. Although the 1,300 cc and CO2 injection treatments established an appropriate MA condition, they showed darkening of the strawberry calyx and excessive suppression of coloring due to the high carbon dioxide concentration during storage. While the 1,300 cc and CO2 injection treatments had been considered appropriate for export to Singapore, the results showed otherwise. These results reflect the cultivar characteristics of strawberry 'Maehyang'.
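
The role the film OTR plays can be illustrated with a toy mass balance in which fruit respiration competes with gas permeation through the film; all rates, areas, volumes, and the CO2/O2 permeation ratio below are assumptions for illustration, not the study's measured values.

```python
# Toy modified-atmosphere package model: respiration vs. film permeation.
def simulate_package(otr_cc_m2_day_atm, hours=480, dt_h=1.0,
                     area_m2=0.05, free_vol_cc=1000.0,
                     resp_o2_cc_h=2.0, resp_co2_cc_h=2.0):
    """Euler integration of O2/CO2 (vol %) in a sealed MA package.
    CO2 transmission is taken as ~4x the O2 OTR, a common film approximation."""
    o2, co2 = 20.9, 0.04                            # start at ambient air
    perm_o2 = otr_cc_m2_day_atm * area_m2 / 24.0    # cc per hour per atm
    perm_co2 = 4.0 * perm_o2
    for _ in range(int(hours / dt_h)):
        o2 += dt_h * (perm_o2 * (20.9 - o2) / 100.0 - resp_o2_cc_h) \
              / free_vol_cc * 100.0
        co2 += dt_h * (resp_co2_cc_h - perm_co2 * (co2 - 0.04) / 100.0) \
               / free_vol_cc * 100.0
        o2, co2 = max(o2, 0.0), max(co2, 0.0)
    return o2, co2

# Low-OTR films trap CO2 and deplete O2; high-OTR films stay near ambient.
for otr in (1300, 20000, 100000):
    o2, co2 = simulate_package(otr)
    print(otr, f"O2={o2:.1f}%", f"CO2={co2:.1f}%")
```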

Keywords: carbon dioxide, firmness, shelf-life, visual quality

Procedia PDF Downloads 384
1426 Interlayer Mechanical Working: An Effective Strategy to Mitigate Solidification Cracking in Wire-Arc Additive Manufacturing (WAAM) of Fe-Based Shape Memory Alloy

Authors: Soumyajit Koley, Kuladeep Rajamudili, Supriyo Ganguly

Abstract:

In recent years, iron-based shape-memory alloys have been emerging as an inexpensive alternative to the costly Ni-Ti alloy and are thus considered suitable for many different applications in civil structures. The Fe-17Mn-10Cr-5Si-4Ni-0.5V-0.5C alloy contains 37 wt.% of total solute elements; such a complex multi-component metallurgical system often leads to severe solute segregation and solidification cracking. Wire-arc additive manufacturing (WAAM) of the Fe-17Mn-10Cr-5Si-4Ni-0.5V-0.5C alloy was attempted using a cold-wire-fed plasma arc torch attached to a 6-axis robot, and self-standing walls were manufactured. However, multiple vertical cracks were observed after the deposition of around 15 layers. Microstructural characterization revealed open surfaces of dendrites inside the cracks, confirming them as solidification cracks. A machine hammer peening (MHP) process was adopted on each layer to cold work the newly deposited alloy, and the MHP traverse speed was varied systematically to attain a window of operation where cracking was completely stopped. Microstructural and textural analyses were then carried out to correlate the peening process with the microstructure. MHP helped in several ways. Firstly, a compressive residual stress was induced in each layer, which countered the tensile residual stress evolving from the solidification process, thus reducing the net tensile stress on the wall along its length. Secondly, significant local plastic deformation from MHP, followed by the thermal cycle induced by the deposition of the next layer, resulted in a recovered and recrystallized equiaxed microstructure instead of long columnar grains along the vertical direction. This microstructural change increased the total crack propagation length and thus the overall toughness. Thirdly, the inter-layer peening significantly reduced the strong cubic {001} crystallographic texture formed along the build direction; since the cubic {001} texture promotes easy separation of planes and easy crack propagation, its reduction alleviates the chance of cracking.

Keywords: iron-based shape-memory alloy, wire-arc additive manufacturing, solidification cracking, inter-layer cold working, machine hammer peening

Procedia PDF Downloads 57
1425 Wave-Powered Airlift Pump, Primarily for Artificial Upwelling

Authors: Bruno Cossu, Elio Carlo

Abstract:

The invention (patent pending) relates to the field of devices that harness wave energy (WEC), especially for artificial upwelling, forced downwelling, and the production of compressed air. In its basic form, the pump consists of a hydro-pneumatic machine, driven by wave energy, characterised by having no moving mechanical parts and being made up of only two structural components: a hollow body, open at the bottom to the sea and partially immersed in sea water, and a tube, joined together to form a single body. The hollow body is shaped like a mushroom whose cap and stem are hollow; the stem is open at both ends and the lower part of its surface is crossed by holes; the tube is external and coaxial to the stem and is joined to it so as to form a single body. This shape of the hollow body and its connection to the tube allow the pump to operate simultaneously as an air compressor (OWC) on the cap side and as an airlift on the stem side. The pump can be implemented in four versions, each with different variants and methods of implementation: 1) for the artificial upwelling of cold, deep ocean water; 2) for lifting and transferring this water to the place of use (above all, fish-farming plants), even kilometres away; 3) for the forced downwelling of surface sea water; 4) for the forced downwelling of surface water, its oxygenation, and the simultaneous production of compressed air. The transfer of deep water, or the downwelling of raised surface water (pump versions 2 and 3 above), is obtained by making the water raised by the airlift flow into the upper inlet of another pipe, internal or adjoined to the airlift. The downwelling of raised surface water, its oxygenation, and the simultaneous production of compressed air (pump version 4) are obtained by installing a venturi tube on the upper end of the pipe, whose restricted section is connected to the external atmosphere, so that it also operates like a hydraulic air compressor (trompe). Furthermore, by combining one or more pumps for the upwelling of cold, deep water with one or more pumps for the downwelling of warm surface water, the system can supply the cold and warm water required to operate an Ocean Thermal Energy Conversion plant, thus allowing the thermal energy of the seawater treated in the process to be used, at no increased cost and in addition to the mechanical energy of the waves, for the purposes indicated in points 1 to 4.

Keywords: air lifted upwelling, fish farming plant, hydraulic air compressor, wave energy converter

Procedia PDF Downloads 131
1424 Progressive Damage Analysis of Mechanically Connected Composites

Authors: Şeyma Saliha Fidan, Ozgur Serin, Ata Mugan

Abstract:

While performing verification analyses of composite structures used in aviation under the static and dynamic loads they are exposed to, it is necessary to obtain the bearing strength limit value for mechanically connected composite structures. For this purpose, various tests are carried out in accordance with aviation standards. Many companies worldwide perform these tests, but the test costs are very high; in addition, because of the need to produce coupons, the high cost of coupon materials, and the long test times, it is necessary to simulate these tests on the computer. To this end, various test coupons were produced using the reinforcement and alignment angles of the composite radomes integrated into the aircraft. Glass-fiber-reinforced and quartz prepregs were used to produce the coupons. Tests performed according to the American Society for Testing and Materials (ASTM) D5961 Procedure C standard were simulated on the computer. The analysis model was created in three dimensions in order to model the bolt-hole contact surface realistically and obtain an exact bearing strength value. The finite element model was built in ANSYS. Since a physical fracture cannot occur in analyses carried out in a virtual environment, a hypothetical fracture is realized by reducing the material properties. The material property reduction coefficient was set to 10%, which the literature reports to give the most realistic approach. There are various theories within this method, which is called progressive failure analysis; because the Hashin theory did not match our experimental results, the Puck progressive damage method was used in all coupon analyses. When the experimental and numerical results are compared, the initial damage points, the resulting force drops, the maximum damage loads, and the bearing strength values are very close. Furthermore, low error rates and similar damage patterns were obtained in both the test and simulation models. In addition, the effects of various parameters on the bearing strength of the composite structure were investigated, such as pre-stress, the use of bushings, the ratio of the distance between the bolt-hole center and the plate edge to the hole diameter (E/D), the ratio of plate width to hole diameter (W/D), and hot-wet environmental conditions.
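
As a rough illustration of the progressive failure idea described above, the sketch below ramps the load on a single degree of freedom and knocks the stiffness down once a failure criterion is exceeded, instead of removing material physically. The modulus, allowable stress, cross-section, and the use of a simple maximum-stress criterion in place of the Puck method are all illustrative assumptions, not values from the study.

```python
# Minimal sketch of a progressive failure analysis loop, assuming a
# simplified single-element model; the real study uses a 3D ANSYS model
# with the Puck criterion. All names and numbers here are illustrative.
import numpy as np

E0 = 50e9          # initial laminate modulus (Pa), assumed
S_ALLOW = 600e6    # allowable stress (Pa), assumed failure threshold
DEGRADATION = 0.10 # retain 10% of stiffness after failure (per the abstract)
AREA = 25e-6       # load-bearing cross-section (m^2), assumed

E = E0
history = []

# Displacement-controlled loading: ramp strain, degrade stiffness on failure.
for strain in np.linspace(0.0, 0.03, 300):
    stress = E * strain
    if stress > S_ALLOW and E > DEGRADATION * E0:
        # "Hypothetical fracture": knock the modulus down instead of
        # physically removing material, as in progressive damage FEA.
        E = DEGRADATION * E0
    history.append((strain, E * strain * AREA))  # (strain, force)

forces = np.array([f for _, f in history])
print(f"peak force (bearing-strength analogue): {forces.max():.0f} N")
```

In a full FEA, the same knockdown would be applied element by element and failure mode by failure mode, which is what produces the stepwise force drops compared against the experiments.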

Keywords: puck, finite element, bolted joint, composite

Procedia PDF Downloads 82
1423 Flow Field Optimization for Proton Exchange Membrane Fuel Cells

Authors: Xiao-Dong Wang, Wei-Mon Yan

Abstract:

The flow field design in the bipolar plates affects the performance of the proton exchange membrane (PEM) fuel cell. This work adopted a combined optimization procedure, including a simplified conjugate-gradient method and a completely three-dimensional, two-phase, non-isothermal fuel cell model, to search for the optimal flow field design of a single serpentine fuel cell of size 9×9 mm with five channels. For the direct solution, the two-fluid method was adopted to incorporate heat effects, using energy equations for the entire cell. The model assumes that the system is steady, the inlet reactants are ideal gases, the flow is laminar, and the porous layers such as the diffusion layer, catalyst layer and PEM are isotropic. The model includes continuity, momentum and species equations for gaseous species; liquid water transport equations in the channels, gas diffusion layers, and catalyst layers; a water transport equation in the membrane; and electron and proton transport equations. The Butler-Volmer equation was used to describe the electrochemical reactions in the catalyst layers. The cell output power density Pcell is maximized subject to an optimal set of channel heights, H1-H5, and channel widths, W2-W5. The basic case, with all channel heights and widths set at 1 mm, yields Pcell=7260 Wm-2. The optimal design displays a tapered characteristic for channels 1, 3 and 4, and a diverging characteristic in height for channels 2 and 5, producing Pcell=8894 Wm-2, an increment of about 22.5%. The reduced heights of channels 2-4 significantly increase sub-rib convection, effectively removing liquid water and enhancing oxygen transport in the gas diffusion layer. The final diverging channel minimizes the leakage of fuel to the outlet via sub-rib convection from channel 4 to channel 5. A near-optimal design that is easily manufactured without a large loss in cell performance was also tested: using a straight final channel of 0.1 mm height led to a 7.37% power loss, while the design with all channel widths set to 1 mm and the optimal channel heights obtained above yields only a 1.68% loss of current density. The presence of a final, diverging channel has a greater impact on cell performance than fine adjustment of channel width under the simulation conditions studied herein.
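
The outer optimization loop can be pictured as below: a conjugate-gradient ascent over the nine design variables (H1-H5, W2-W5), with each gradient component obtained by finite differences, i.e. extra runs of the cell model. Since the actual 3D two-phase model is not available here, a toy quadratic surrogate stands in for it; the step size, bounds, and surrogate peak are assumptions, not values from the paper.

```python
# Minimal sketch of a simplified conjugate-gradient outer loop, assuming a
# black-box solver pcell(x) that runs the fuel cell model and returns the
# power density; here it is replaced by a toy quadratic surrogate.
import numpy as np

def pcell(x):
    # Stand-in for the CFD model: peak at heights/widths of 0.6 mm (assumed).
    return 8894.0 - 2000.0 * np.sum((x - 0.6) ** 2)

def grad_fd(f, x, h=1e-3):
    # Forward-difference gradient: one extra model run per design variable.
    g = np.zeros_like(x)
    f0 = f(x)
    for i in range(len(x)):
        xp = x.copy(); xp[i] += h
        g[i] = (f(xp) - f0) / h
    return g

x = np.ones(9)          # H1-H5 and W2-W5, all starting at 1 mm (basic case)
g = grad_fd(pcell, x)
d = g.copy()            # ascent direction (maximization)
for it in range(50):
    x = np.clip(x + 0.05 * d / (np.linalg.norm(d) + 1e-12), 0.1, 2.0)
    g_new = grad_fd(pcell, x)
    beta = max(0.0, g_new @ (g_new - g) / (g @ g + 1e-12))  # Polak-Ribiere
    d = g_new + beta * d
    g = g_new

print(f"optimized Pcell surrogate: {pcell(x):.0f} W m-2")
```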

Keywords: optimization, flow field design, simplified conjugate-gradient method, serpentine flow field, sub-rib convection

Procedia PDF Downloads 283
1422 Home-Based Care with Follow-Up at Outpatient Unit or Community Follow-Up Center with/without Food Supplementation and/or Psychosocial Stimulation of Children with Moderate Acute Malnutrition in Bangladesh

Authors: Md Iqbal Hossain, Tahmeed Ahmed, Kenneth H. Brown

Abstract:

Objective: To assess the effect of community-based follow-up, with or without food supplementation and/or psychosocial stimulation, as an alternative to the current hospital-based follow-up of children with moderate acute malnutrition (MAM; WHZ between -2 and -3). Design/methods: The study was conducted at the ICDDR,B Dhaka Hospital and in four urban primary health care centers of Dhaka, Bangladesh during 2005-2007. The efficacy of five randomly assigned interventions was compared with respect to the rate of completion of follow-up, growth, and morbidity in 227 children with MAM, aged 6-24 months, who were initially treated at ICDDR,B for diarrhea and/or other morbidities. The interventions were: 1) Fortnightly follow-up care (FFC) at the ICDDR,B outpatient unit, including growth monitoring, health education, and micronutrient supplementation (H-C, n=49). 2) FFC at a community follow-up unit (CNFU), established in existing urban primary health care centers close to the child's residence, with the same regimen as H-C (C-C, n=53). 3) As per C-C plus a cereal-based supplementary food (SF) (C-SF, n=49). The SF packets were distributed on recruitment and at every CNFU visit, at 1 packet/day for children aged 6-11 months and 2 packets/day for those aged 12-24 months; each packet contained 20 g toasted rice powder, 10 g toasted lentil powder, 5 g molasses, and 3 g soybean oil, providing a total of ~150 kcal with 11% of energy from protein. 4) As per C-C plus psychosocial stimulation (PS) (C-PS, n=43), consisting of child stimulation and parental counseling conducted by trained health workers. 5) As per C-C plus both SF and PS (C-SF+PS, n=33). Results: A total of 227 children (48.5% female), with a mean ± SD age of 12.6 ± 3.8 months and WHZ of -2.53 ± 0.28, were enrolled. Baseline characteristics did not differ by treatment group. The rate of spontaneous attendance at scheduled follow-up visits gradually decreased in all groups. Follow-up attendance and gains in weight and length were greater in groups C-SF, C-SF+PS, and C-PS than in C-C, and lowest in H-C. Children in the H-C group suffered more often from diarrhea (25% vs. 4-9%) and fever (28% vs. 8-11%) than the other groups (p < 0.05). Children who attended at least five of the six scheduled follow-up visits gained more weight (median: 0.86 vs. 0.62 kg, p=0.002) and length (median: 2.4 vs. 2.0 cm, p=0.009) than those who attended fewer. Conclusions: Community-based service delivery, especially when it includes supplementary food with or without psychosocial stimulation, permits better rehabilitation of children with MAM than current hospital outpatient-based care. By scaling up community-based follow-up including food supplementation, with or without psychosocial stimulation, a greater number of children with MAM could be rehabilitated more effectively.
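
As a sanity check on the stated packet composition (~150 kcal, 11% of energy from protein), the short calculation below applies standard Atwater factors to assumed macronutrient contents for each ingredient; the composition figures are illustrative estimates, not data from the study.

```python
# Rough check of the supplement's stated energy content, using Atwater
# factors and assumed macronutrient compositions for the ingredients;
# the actual recipe values may differ.
CARB_KCAL, PROT_KCAL, FAT_KCAL = 4, 4, 9  # kcal per gram

# grams of (carbohydrate, protein, fat) per gram of ingredient -- assumptions
composition = {
    "rice powder":   (0.80, 0.07, 0.01),
    "lentil powder": (0.60, 0.25, 0.01),
    "molasses":      (0.75, 0.00, 0.00),
    "soybean oil":   (0.00, 0.00, 1.00),
}
packet = {"rice powder": 20, "lentil powder": 10, "molasses": 5, "soybean oil": 3}

total = protein_kcal = 0.0
for name, grams in packet.items():
    carb, prot, fat = composition[name]
    total += grams * (carb * CARB_KCAL + prot * PROT_KCAL + fat * FAT_KCAL)
    protein_kcal += grams * prot * PROT_KCAL

print(f"~{total:.0f} kcal per packet, {100 * protein_kcal / total:.0f}% from protein")
# -> roughly 150 kcal with ~11% protein energy, consistent with the abstract
```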

Keywords: community-based management, moderate acute malnutrition, psychosocial stimulation, supplementary food

Procedia PDF Downloads 417
1421 Preliminary Efficacy of a Pilot Paediatric Day Hospital Program to Address Severe Mental Illness, Obesity, and Binge Eating

Authors: Alene Toulany, Elizabeth Dettmer, Seena Grewal, Kaley Roosen, Andrea Regina, Cathleen Steinegger, Kate Stadelman, Melissa Chambers, Lindsay Lochhead, Kelsey Gallagher, Alissa Steinberg, Andrea Leyser, Allison Lougheed, Jill Hamilton

Abstract:

Obesity and psychiatric disorders occur together so frequently that the combination has been called an epidemic within an epidemic. Youth living with obesity are at increased risk for trauma, depression, anxiety, and disordered eating. Although symptoms of binge eating disorder are common in paediatric obesity management programs, they are often not identified or addressed within treatment. At The Hospital for Sick Children (SickKids), a tertiary care paediatric hospital in Toronto, Canada, adolescents with obesity are treated in an interdisciplinary outpatient clinic (1-2 hours/week). This intensity of care is simply not enough for these extremely complex patients, and existing day treatment programs for eating and psychiatric disorders are not well suited to patients with obesity. To address this identified care gap, a unique collaboration was formed between the obesity, psychiatry, and eating disorder programs at SickKids in 2015. The aim of this collaboration was to provide an enhanced treatment arm to the general psychiatry day hospital program that addresses both the mental health issues and the lifestyle challenges common to youth with obesity and binge eating. The program is currently in year one of a two-year pilot project and is designed for a length of stay of approximately 6 months. All youth participate in daily group therapy, academics, and structured mealtimes. The groups are primarily skills-based and are informed by cognitive/dialectical behavioural therapies. Weekly family therapy and individual therapy, as well as weekly medical appointments with a psychiatrist and a nurse, are provided. Youth in the enhanced treatment arm also receive regular sessions with a dietitian to establish normalized eating behaviours, and monthly multifamily meal sessions to address challenges related to behaviour change and mealtimes in the home. Outcomes to be evaluated include measures of mental health, anthropometrics, metabolic status, and healthcare satisfaction. By the end of the two years, about 16 youth are expected to have participated. This model of care delivery is the first of its kind in Canada and is expected to inform future paediatric treatment practices.

Keywords: adolescent, binge eating, mental illness, obesity

Procedia PDF Downloads 333
1420 Recent Policy Changes in Israeli Early Childhood Frameworks: Hope for the Future

Authors: Yaara Shilo

Abstract:

Early childhood education and care (ECEC) in Israel has undergone extensive reform and now requires daycare centers to meet internationally recognized professional standards. Since 1948, one of the aims of childcare facilities has been to enable women's participation in the workforce. A 1965 law grouped daycare centers for young children with facilities for the elderly and for disabled persons under the same authority. In the 1970s, ECEC leaders sought to change childcare settings from proprietary to educational facilities. From 1976, deliberations in the Knesset regarding the appropriate attribution of ECEC frameworks resulted in their being moved among various authorities that supported women's employment: the Ministries of Finance, Industry, and Commerce, as well as the Welfare Department. Prior to 2018, 75% of infants and toddlers in institutional care were in unlicensed and unsupervised settings. Legislative processes accompanied the conceptual change toward an appropriate attribution of ECEC frameworks, and position papers over the past two decades resulted in recommendations for standards conforming to OECD regulations. At the same time, incidents of child abuse, some resulting in death, riveted public attention to the need for adequate government supervision, accelerating the legislative process. Appropriate care for very young children must center on quality interactions with caregivers, which requires adequate staff training. Finally, in 2018 a law was passed stipulating standards for staff training, proper facilities, child-adult ratios, and safety measures. The Ariav commission expanded training requirements to caregivers for ages 0-3, and the transfer of ECEC to the Ministry of Education ensured the establishment of basic training. The groundwork created by the new legislation initiated the professional development of early childhood educators for ages 0-3; this process should raise salaries and bolster the system's ability to attract quality employees. In 2022, responsibility for ECEC for ages 0-3 was transferred from the Ministry of Finance to the Ministry of Education, shifting the emphasis from proprietary care to professional considerations focusing on wellbeing and early childhood education. These revolutionary changes in ECEC point to a new age in the care and education of Israel's youngest citizens: the implementation of international standards, adequate training, and professionalization of the workforce focus on the child's needs.

Keywords: policy, early childhood, care and education, daycare, development

Procedia PDF Downloads 97
1419 Unsupervised Detection of Burned Area from Remote Sensing Images Using Spatial Correlation and Fuzzy Clustering

Authors: Tauqir A. Moughal, Fusheng Yu, Abeer Mazher

Abstract:

Land-cover and land-use change information is important because of its practical uses in various applications, including deforestation, damage assessment, disaster monitoring, urban expansion, planning, and land management. Therefore, developing change detection methods for remote sensing images is an important ongoing research agenda. However, detecting change in optical remote sensing images is not a trivial task, owing to many factors, including the vagueness of the boundaries between changed and unchanged regions and the spatial dependence of each pixel on its neighborhood. In this paper, we propose a binary change detection technique for bi-temporal optical remote sensing images. As in most optical remote sensing images, the transition between the two clusters (change and no change) is overlapping, and existing methods are incapable of providing accurate cluster boundaries. In this regard, a methodology is proposed that uses fuzzy c-means clustering to tackle the vagueness between the changed and unchanged classes by formulating soft boundaries between them. Furthermore, to exploit the neighborhood information of the pixels, input patterns are generated for each pixel from the bi-temporal images using 3×3, 5×5 and 7×7 windows. The between-image and within-image spatial dependence of each pixel on its neighborhood is quantified using the Pearson product-moment correlation and Moran's I statistic, respectively. The proposed technique consists of two phases. First, the between-image and within-image spatial correlations are calculated to exploit the fact that pixels at different locations may not be independent. Second, the fuzzy c-means technique is used to produce two clusters from the input features, not only handling the vagueness between the changed and unchanged classes but also exploiting the spatial correlation of the pixels. To show the effectiveness of the proposed technique, experiments were conducted on multispectral, bi-temporal remote sensing images. A subset (2100×1212 pixels) of a pan-sharpened, bi-temporal Landsat 5 Thematic Mapper optical image of Los Angeles, California, is used in this study; it covers a long-lasting forest fire that continued from July until October 2009. The early and late forest fire images were acquired on July 5, 2009 and October 25, 2009, respectively. The proposed technique is used to detect the fire (which causes change on the earth's surface) and is compared with the existing K-means clustering technique. Experimental results showed that the proposed technique performs better than the existing one. The proposed technique is easily extendable to optical hyperspectral images and is suitable for many practical applications.
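
A compact sketch of the two-phase idea is given below: Moran's I quantifies the within-image spatial structure of the difference image, and a small fuzzy c-means implementation then splits the pixels into change and no-change clusters with soft memberships. The synthetic image, rook-adjacency weights, window-free feature (a plain difference image), and fuzzifier m=2 are simplifying assumptions rather than the authors' exact configuration.

```python
# Minimal sketch: Moran's I on the difference image, then fuzzy c-means to
# separate change from no-change. Pure numpy; all data and settings are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
img1 = rng.random((60, 60))
img2 = img1.copy()
img2[20:40, 20:40] += 0.8          # synthetic "burned" (changed) patch
diff = np.abs(img2 - img1)

def morans_i(x):
    # Moran's I with a rook-adjacency (4-neighbour) weight scheme.
    z = x - x.mean()
    num = (z[:-1, :] * z[1:, :]).sum() + (z[:, :-1] * z[:, 1:]).sum()
    w_sum = 2 * (z[:-1, :].size + z[:, :-1].size)  # symmetric weights
    return (x.size / w_sum) * (2 * num) / (z ** 2).sum()

def fuzzy_cmeans(data, c=2, m=2.0, iters=100):
    # data: (n_samples, n_features); returns memberships and cluster centers.
    u = rng.random((len(data), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ data) / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / dist ** (2 / (m - 1))
        u /= u.sum(axis=1, keepdims=True)   # soft boundaries between classes
    return u, centers

print(f"Moran's I of difference image: {morans_i(diff):.3f}")  # near 1 = clustered
u, centers = fuzzy_cmeans(diff.reshape(-1, 1))
change_cluster = centers[:, 0].argmax()     # cluster with larger mean difference
change_map = (u[:, change_cluster] > 0.5).reshape(diff.shape)
print(f"changed pixels detected: {change_map.sum()} (true: 400)")
```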

Keywords: burned area, change detection, correlation, fuzzy clustering, optical remote sensing

Procedia PDF Downloads 156
1418 Coupling of Microfluidic Droplet Systems with ESI-MS Detection for Reaction Optimization

Authors: Julia R. Beulig, Stefan Ohla, Detlev Belder

Abstract:

In contrast to off-line analytical methods, lab-on-a-chip technology delivers direct information about the observed reaction. Microfluidic devices therefore make an important scientific contribution, e.g. in the field of synthetic chemistry, where the rapid generation of analytical data can be applied to the optimization of chemical reactions. These devices enable fast changes of reaction conditions as well as a resource-saving mode of operation. In the presented work, we focus on the investigation of multiphase regimes, more specifically on biphasic microfluidic droplet systems, in which every single droplet is a reaction container with customized conditions. The biggest challenge is the rapid qualitative and quantitative readout of information, as most detection techniques for droplet systems are non-specific, time-consuming or too slow. An exception is electrospray mass spectrometry (ESI-MS). Combining a reaction screening platform with a rapid and specific detection method is an important step in droplet-based microfluidics. In this work, we present a novel approach for synthesis optimization on the nanoliter scale with direct ESI-MS detection. We show the development of a droplet-based microfluidic device that enables the modification of different parameters while simultaneously monitoring their effect on the reaction within a single run. Using common soft- and photolithographic techniques, a polydimethylsiloxane (PDMS) microfluidic chip with different functionalities was developed. As an interface for MS detection, we use a steel capillary for ESI and improve the spray stability with Teflon siphon tubing inserted underneath the steel capillary. By optimizing the flow rates, it is possible to screen the parameters of various reactions; this is shown by way of example for a domino Knoevenagel hetero-Diels-Alder reaction. Different starting materials, catalyst concentrations and solvent compositions were investigated. Owing to the high repetition rate of droplet production, each set of reaction conditions is examined hundreds of times. As a result of the investigation, we obtain suitable reagents, the ideal water-methanol ratio of the solvent, and the most effective catalyst concentration. The developed system can help determine the optimal parameters of a reaction within a short time. With this novel tool, we take an important step in the field of combining droplet-based microfluidics with organic reaction screening.

Keywords: droplet, mass spectrometry, microfluidics, organic reaction, screening

Procedia PDF Downloads 279
1417 Energy Efficiency Measures in Canada’s Iron and Steel Industry

Authors: A. Talaei, M. Ahiduzzaman, A. Kumar

Abstract:

In Canada, an increase in the production of iron and steel is anticipated to satisfy the increasing demand for iron and steel in the oil sands and automobile industries. GHG emissions from the iron and steel sector are predicted to increase continuously until 2030; with emissions of 20 million tonnes of carbon dioxide equivalent, the sector would then account for more than 2% of total national GHG emissions, or 12% of industrial emissions (a 25% increase from 2010 levels). There is therefore an urgent need to improve energy intensity and to implement energy efficiency measures in the industry to reduce its GHG footprint. This paper analyzes the current energy consumption in the Canadian iron and steel industry and identifies energy efficiency opportunities to improve energy intensity and mitigate greenhouse gas emissions. To do this, a demand tree is developed representing the different iron and steel production routes and the technologies within each route. The main energy consumer within the industry is found to be flared heaters, accounting for 81% of overall energy consumption, followed by motor systems and steam generation, each accounting for 7%. Eighteen different energy efficiency measures are identified that will help improve efficiency in various subsectors of the industry. In the sintering process, heat recovery from coolers provides high potential for energy saving and can be integrated in both new and existing plants; coke dry quenching (CDQ) has the same advantages. Within the blast furnace iron-making process, injection of large amounts of coal into the furnace appears more effective than any other option in this category. In addition, because coal-powered electricity is being phased out in Ontario (where the majority of iron and steel plants are located), there will be surplus coal that could be used in iron and steel plants. In the steel-making processes, the recovery of Basic Oxygen Furnace (BOF) gas and scrap preheating provide considerable potential for energy savings in the BOF and Electric Arc Furnace (EAF) steel-making processes, respectively. However, despite its energy savings potential, BOF gas recovery is not applicable in existing plants using steam recovery processes, and given that the share of EAF in steel production is expected to increase, the application potential of that technology will be limited. On the other hand, the long lifetime of the technology and the expected capacity increase of EAF make scrap preheating a justified energy saving option. This paper presents the results of the assessment of the above-mentioned options in terms of costs and GHG mitigation potential.
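
The "demand tree" is essentially a hierarchical data structure mapping production routes to processes and candidate efficiency measures; a minimal sketch is shown below. The route/process grouping and the 5% "other" share are illustrative assumptions; only the 81%/7%/7% split and the named measures come from the abstract.

```python
# Illustrative sketch of a demand-tree data structure: production routes,
# the process technologies within each, and the efficiency measures named
# in the abstract. Structure and placeholder values are assumptions.
demand_tree = {
    "BF-BOF route": {
        "sintering": ["cooler heat recovery"],
        "coke making": ["coke dry quenching (CDQ)"],
        "blast furnace iron-making": ["large-scale coal injection"],
        "BOF steel-making": ["BOF gas recovery"],
    },
    "EAF route": {
        "EAF steel-making": ["scrap preheating"],
    },
}

# End-use split of energy within the industry, as reported in the abstract;
# the residual "other" share is an assumption to close the balance.
energy_shares = {"flared heaters": 0.81, "motor systems": 0.07,
                 "steam generation": 0.07, "other": 0.05}

for route, processes in demand_tree.items():
    print(route)
    for process, measures in processes.items():
        print(f"  {process}: candidate measures -> {', '.join(measures)}")
print(f"check: shares sum to {sum(energy_shares.values()):.2f}")
```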

Keywords: iron and steel sectors, energy efficiency improvement, blast furnace iron-making process, GHG mitigation

Procedia PDF Downloads 384
1416 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection

Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye

Abstract:

Skew detection and correction form an important part of digital document analysis, because uncompensated skew can deteriorate document features and complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed, even at a small angle. Once a document has been digitized through the scanning system and binarization has been achieved, skew correction is required before further image analysis. Research efforts have been devoted to this area, with algorithms developed to eliminate document skew. Skew correction algorithms can be compared on several performance criteria, the most important being the accuracy of skew angle detection, the range of detectable skew angles, the processing speed, the computational complexity and, consequently, the memory space used. The standard Hough Transform has been successfully implemented for text document skew angle estimation. However, its level of accuracy depends largely on how fine the angular step size is; increasing the accuracy therefore consumes more time and memory, especially where the number of pixels is considerably large. Whenever the Hough Transform is used, there is always a trade-off between accuracy and speed, so a more efficient solution is needed that optimizes space as well as time. In this paper, an improved Hough Transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified algorithm resolves the contradiction between memory space, running time and accuracy. Our algorithm first estimates the angle to zero decimal places using the standard Hough Transform, achieving minimal running time and memory use at the cost of accuracy. Then, to increase accuracy, if the angle estimated by the basic algorithm is x degrees, we rerun the basic algorithm over a narrow range around x degrees with an accuracy of one decimal place, and the same process is iterated until the desired level of accuracy is achieved. Our skew estimation and correction procedure for text images is implemented in MATLAB. The memory space and processing time are also tabulated, with skew angles assumed to be between 0° and 45°. The simulation results, demonstrated in MATLAB, show the high performance of our algorithm, with less computational time and memory space used in detecting document skew for a variety of documents with different levels of complexity.
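
The coarse-to-fine idea can be sketched as follows: sweep candidate angles at 1-degree steps, then re-sweep only a narrow band around the coarse estimate at ten times finer resolution, repeating until the desired number of decimal places is reached. A projection-profile variant of the Hough voting is used here for brevity, and the refinement band, scoring function, and test image are assumptions, not the authors' exact scheme.

```python
# Minimal sketch of coarse-to-fine skew angle estimation; pure numpy.
import numpy as np

def skew_score(img, angle_deg):
    # Score an angle by the peakiness of the profile obtained by projecting
    # foreground pixels perpendicular to the candidate text-line direction;
    # text lines give sharply peaked profiles at the true skew angle.
    ys, xs = np.nonzero(img)
    theta = np.deg2rad(angle_deg)
    rho = ys * np.cos(theta) - xs * np.sin(theta)
    hist, _ = np.histogram(rho, bins=img.shape[0])
    return hist.var()                                # peaky -> high variance

def estimate_skew(img, lo=-45.0, hi=45.0, decimals=2):
    step = 1.0
    for _ in range(decimals + 1):                    # coarse pass, then refine
        angles = np.arange(lo, hi + step, step)
        best = max(angles, key=lambda a: skew_score(img, a))
        lo, hi, step = best - step, best + step, step / 10.0
    return best

# Synthetic "document": horizontal text lines rotated by a known angle.
img = np.zeros((200, 200), dtype=np.uint8)
img[::10, 20:180] = 1
ys, xs = np.nonzero(img)
theta = np.deg2rad(7.3)                              # ground-truth skew
xr = (xs - 100) * np.cos(theta) - (ys - 100) * np.sin(theta) + 100
yr = (xs - 100) * np.sin(theta) + (ys - 100) * np.cos(theta) + 100
rot = np.zeros_like(img)
rot[np.clip(yr, 0, 199).astype(int), np.clip(xr, 0, 199).astype(int)] = 1

print(f"estimated skew: {estimate_skew(rot):.2f} deg (true: 7.30)")
```

Each refinement pass only evaluates a few dozen angles, so total work stays close to that of the coarse sweep while the angular resolution improves tenfold per pass.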

Keywords: hough-transform, skew-detection, skew-angle, skew-correction, text-document

Procedia PDF Downloads 136
1415 High Throughput Virtual Screening against NS3 Helicase of Japanese Encephalitis Virus (JEV)

Authors: Soma Banerjee, Aamen Talukdar, Argha Mandal, Dipankar Chaudhuri

Abstract:

Japanese Encephalitis is a major infectious disease, with nearly half the world's population living in areas where it is prevalent. Treatment currently involves only supportive care and symptom management, with prevention through vaccination. Given the lack of antiviral drugs against Japanese Encephalitis Virus (JEV), the quest for such agents remains a priority, and simulation studies of drug targets against JEV are therefore important. Toward this purpose, docking experiments with kinase inhibitors were performed against the chosen target, NS3 helicase, as it is a nucleoside-binding protein. Previous efforts in computational drug design against JEV revealed some lead molecules by virtual screening using public domain software. To find leads more specifically and accurately, this study used the proprietary software Schrödinger GLIDE. The druggability of the pockets in the NS3 helicase crystal structure was first calculated with SITEMAP, the sites were then screened according to compatibility with ATP, and the site most compatible with ATP was selected as the target. Virtual screening was performed with GLIDE, acquiring ligands from the KinaseSARfari, KinaseKnowledgebase and Published Inhibitor Set databases. The 25 ligands with the best docking scores from each database were re-docked in XP mode. Protein structure alignment of NS3 was performed using VAST against MMDB, and the similar human proteins were docked to all of the best-scoring ligands; ligands that scored low against the human proteins were chosen for further study, while the high-scoring ones were screened out. Seventy-three ligands were listed as the best-scoring ones after HTVS. Protein structure alignment of NS3 revealed 3 human proteins with RMSD values of less than 2 Å. Docking against these three proteins revealed the inhibitors that could interfere with and inhibit human proteins, and those inhibitors were screened out. Among the remaining ligands, those with docking scores worse than a threshold value were also removed to obtain the final hits. Analysis of the docked complexes through 2D interaction diagrams revealed the amino acid residues essential for ligand binding within the active site; this interaction analysis will help to find a strongly interacting scaffold among the hits. The experiment yielded 21 hits with the best docking scores, which could be investigated further for their drug-like properties. Aside from providing suitable leads, specific NS3 helicase-inhibitor interactions were identified. Selection of target modification strategies complementing the docking methodology, which can result in better lead compounds, is in progress; such enhanced leads can lead to better in vitro testing.
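
The post-docking filtering logic described above amounts to keeping ligands that bind the viral target well while binding the structurally similar human proteins poorly. GLIDE itself is proprietary, so the sketch below works on hypothetical exported scores; the compound names, score values, and both cutoffs are illustrative assumptions (by convention, more negative docking scores mean stronger predicted binding).

```python
# Minimal sketch of selectivity filtering on docking results; all values
# are hypothetical stand-ins for what a GLIDE run might export.
from dataclasses import dataclass

@dataclass
class Ligand:
    name: str
    ns3_score: float         # docking score vs. NS3 helicase target site
    human_scores: tuple      # scores vs. the 3 aligned human proteins

HIT_THRESHOLD = -7.0         # assumed cutoff for a "good" NS3 score
OFF_TARGET_CUTOFF = -6.0     # stronger than this vs. a human protein -> drop

ligands = [
    Ligand("cmpd_A", -9.1, (-4.2, -3.8, -5.0)),
    Ligand("cmpd_B", -8.4, (-7.5, -4.1, -4.4)),   # strong human binder: dropped
    Ligand("cmpd_C", -6.2, (-3.0, -2.9, -3.3)),   # weak NS3 binder: dropped
]

# Keep ligands passing the target threshold AND weak against all human proteins.
hits = [
    lig for lig in ligands
    if lig.ns3_score <= HIT_THRESHOLD
    and all(s > OFF_TARGET_CUTOFF for s in lig.human_scores)
]
print([lig.name for lig in hits])  # -> ['cmpd_A']
```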

Keywords: antivirals, docking, GLIDE, high-throughput virtual screening, Japanese encephalitis, NS3 helicase

Procedia PDF Downloads 211