Search results for: full scale tests
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11782

1642 Clinical Features, Diagnosis and Treatment Outcomes in Necrotising Autoimmune Myopathy: A Rare Entity in the Spectrum of Inflammatory Myopathies

Authors: Tamphasana Wairokpam

Abstract:

Inflammatory myopathies (IMs) have long been recognised as a heterogeneous family of myopathies with acute, subacute, and sometimes chronic presentation, and they are potentially treatable. Necrotising autoimmune myopathies (NAM) are a relatively new subset of myopathies. Patients generally present with subacute onset of proximal myopathy and significantly elevated creatine kinase (CK) levels. It is increasingly recognised that there are limitations to the independent diagnostic utility of muscle biopsy. Immunohistochemistry tests may reveal important information in these cases. The traditional classification of IMs failed to recognise NAM as a separate entity and did not adequately emphasise the diversity of IMs. This review and case report on NAM aims to highlight the heterogeneity of this entity and to focus on the distinct clinical presentation, biopsy findings, specific auto-antibodies implicated, and available treatment options with prognosis. This article is a meta-analysis of the literature on NAM and a case report illustrating the clinical course, investigation and biopsy findings, antibodies implicated, and management of a patient with NAM. The main databases used for the search were PubMed, Google Scholar, and the Cochrane Library. Altogether, 67 publications have been taken as references. Two biomarkers, anti-signal recognition particle (SRP) and anti-3-hydroxy-3-methylglutaryl-coenzyme A reductase (HMGCR) antibodies, have been found to be associated with NAM in about two-thirds of cases. Interestingly, anti-SRP-associated NAM appears to be more aggressive in its clinical course than its anti-HMGCR-associated counterpart. Biopsy shows muscle fibre necrosis without inflammation. There are reports of statin-induced NAM in which the myopathy progressed even after discontinuation of statins, pointing towards an underlying immune mechanism. Diagnosing NAM is essential, as it requires more aggressive immunotherapy than other types of IMs.
Most cases are refractory to corticosteroid monotherapy. Immunosuppressive therapy with other immunotherapeutic agents, such as IVIg, rituximab, mycophenolate mofetil, and azathioprine, has been explored and found to have a role in the treatment of NAM. In conclusion, given the heterogeneity of NAM, it appears that NAM is not a single entity but comprises many different forms despite similarities in presentation, and its classification remains an evolving field. A thorough understanding of the underlying mechanisms and of the clinical correlation with the antibodies associated with NAM is essential for efficacious management and disease prognostication.

Keywords: inflammatory myopathies, necrotising autoimmune myopathies, anti-SRP antibody, anti-HMGCR antibody, statin induced myopathy

Procedia PDF Downloads 100
1641 An Effort at Improving Reliability of Laboratory Data in Titrimetric Analysis for Zinc Sulphate Tablets Using Validated Spreadsheet Calculators

Authors: M. A. Okezue, K. L. Clase, S. R. Byrn

Abstract:

The requirement for maintaining data integrity in laboratory operations is critical for regulatory compliance. Automation of procedures reduces the incidence of human error. Quality control laboratories located in low-income economies may face barriers in attempts to automate their processes. Since data from quality control tests on pharmaceutical products are used in making regulatory decisions, it is important that laboratory reports are accurate and reliable. Zinc sulphate (ZnSO4) tablets are used in the treatment of diarrhea in the pediatric population, and as an adjunct therapy in COVID-19 regimens. Unfortunately, the zinc content in these formulations is determined titrimetrically, a manual analytical procedure. The assay for ZnSO4 tablets involves time-consuming steps that contain mathematical formulae prone to calculation errors. To achieve consistency, save costs, and improve data integrity, validated spreadsheets were developed to simplify the two critical steps in the analysis of ZnSO4 tablets: standardization of 0.1 M sodium edetate (EDTA) solution, and the complexometric titration assay procedure. The assay method in the United States Pharmacopoeia was used to create a process flow for ZnSO4 tablets. For each step in the process, different formulae were input into two spreadsheets to automate the calculations. Further checks were created within the automated system to ensure the validity of replicate analyses in titrimetric procedures. Validations were conducted using five data sets of manually computed assay results. The acceptance criteria set for the protocol were met. Significant p-values (p < 0.05, α = 0.05, at 95% confidence interval) were obtained from Student's t-test evaluation of the mean values for manually calculated and spreadsheet results at all levels of the analysis flow. Right-first-time analysis and principles of data integrity were enhanced by use of the validated spreadsheet calculators in titrimetric evaluations of ZnSO4 tablets.
Human errors in calculations were minimized when procedures were automated in quality control laboratories. The assay procedure for the formulation was completed in a time-efficient manner with a greater level of accuracy. This project is expected to promote cost savings for laboratory business models.
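The paired comparison between manually computed and spreadsheet results described above can be sketched as below; the assay values are hypothetical and only illustrate the paired t statistic, not the study's data.

```python
import math

def paired_t_statistic(manual, spreadsheet):
    """Paired t statistic for two matched sets of assay results."""
    assert len(manual) == len(spreadsheet)
    diffs = [m - s for m, s in zip(manual, spreadsheet)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)

# Hypothetical % assay values for five validation data sets
manual      = [99.2, 98.7, 100.1, 99.5, 98.9]
spreadsheet = [99.5, 98.8, 100.4, 99.6, 99.3]

t = paired_t_statistic(manual, spreadsheet)
t_crit = 2.776  # two-tailed critical value for df = 4 at alpha = 0.05
print(round(t, 3), abs(t) > t_crit)
```

A |t| exceeding the critical value for n - 1 degrees of freedom flags a significant difference between the two calculation routes at the 0.05 level.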

Keywords: data integrity, spreadsheets, titrimetry, validation, zinc sulphate tablets

Procedia PDF Downloads 167
1640 A Robust Optimization of Chassis Durability/Comfort Compromise Using Chebyshev Polynomial Chaos Expansion Method

Authors: Hanwei Gao, Louis Jezequel, Eric Cabrol, Bernard Vitry

Abstract:

The chassis system is composed of complex elements that take up all the loads from the tire-ground contact area, and thus it plays an important role in numerous specifications such as durability, comfort, and crash. During the development of new vehicle projects at Renault, durability validation is always the main focus, while deployment of comfort comes later in the project. Therefore, design choices sometimes have to be reconsidered because of the natural incompatibility between these two specifications. Besides, robustness is also an important point of concern, as it is related to manufacturing costs as well as to performance after the ageing of components such as shock absorbers. In this paper an approach is proposed that aims to realize a multi-objective optimization between chassis endurance and comfort while taking random factors into consideration. The adaptive-sparse polynomial chaos expansion (PCE) method with Chebyshev polynomial series has been applied to predict the uncertainty intervals of a system's responses according to its uncertain-but-bounded parameters. The approach can be divided into three steps. First, an initial design of experiments is realized to build the response surfaces that statistically represent a black-box system. Second, within several iterations an optimum set is proposed and validated, which will form a Pareto front. At the same time the robustness of each response, serving as an additional objective, is calculated from the pre-defined parameter intervals and the response surfaces obtained in the first step. Finally, an inverse strategy is carried out to determine the parameter tolerance combination with a maximally acceptable degradation of the responses in terms of manufacturing costs. A quarter-car model has been tested as an example by applying road excitations from actual road measurements for both endurance and comfort calculations.
One indicator, based on Basquin's law, is defined to compare the global chassis durability of different parameter settings. Another indicator, related to comfort, is obtained from the vertical acceleration of the sprung mass. An optimum set with the best robustness has finally been obtained, and the reference tests confirm a good robustness prediction by the Chebyshev PCE method. This example demonstrates the effectiveness and reliability of the approach, in particular its ability to save computational costs for a complex system.
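As a minimal illustration of the surrogate-modelling step, the sketch below fits a Chebyshev interpolant to a hypothetical one-dimensional response and scans it over the bounded parameter range to obtain a predicted response interval; the response function and node count are assumptions, not the paper's chassis model.

```python
import math

def cheb_fit(f, n):
    """Chebyshev interpolation coefficients for f on [-1, 1] (n nodes)."""
    nodes = [math.cos(math.pi * (k + 0.5) / n) for k in range(n)]
    fk = [f(x) for x in nodes]
    return [2.0 / n * sum(fk[k] * math.cos(j * math.pi * (k + 0.5) / n)
                          for k in range(n)) for j in range(n)]

def cheb_eval(c, x):
    """Evaluate the Chebyshev series (c[0] is halved by convention)."""
    total = 0.5 * c[0]
    t_prev, t_cur = 1.0, x
    for cj in c[1:]:
        total += cj * t_cur
        t_prev, t_cur = t_cur, 2.0 * x * t_cur - t_prev  # T_{j+1} recurrence
    return total

# Hypothetical black-box "response" (e.g. a durability indicator) of one
# uncertain-but-bounded parameter scaled to [-1, 1].
def response(x):
    return math.exp(0.5 * x) + 0.1 * x * x

coeffs = cheb_fit(response, 8)            # surrogate built from 8 model runs
xs = [i / 100.0 for i in range(-100, 101)]
preds = [cheb_eval(coeffs, x) for x in xs]
lo, hi = min(preds), max(preds)           # predicted uncertainty interval
print(round(lo, 4), round(hi, 4))
```

In the paper's setting the same idea is applied in several dimensions, with the interval extremes feeding the robustness objective.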

Keywords: chassis durability, Chebyshev polynomials, multi-objective optimization, polynomial chaos expansion, ride comfort, robust design

Procedia PDF Downloads 149
1639 The Evaluation of the Re-Construction Project Hamamönü, Ankara in Turkey as a Case from Socio-Cultural Perspective

Authors: Tuğçe Kök, Gözen Güner Aktaş, Nur Ayalp

Abstract:

In a globalised world, social and cultural sustainability are subjects that have gained significant importance in recent years. The concept of sustainability was included for the first time in the document of the World Conservation Union (IUCN) through the World Charter for Nature, adopted in 1982. Merged with urban sustainability, a new phenomenon has since emerged. Sustainability is an essential concept, discussed here through the socio-cultural field of sustainability. Together with central government and local authorities, conservation activities have intensified around the protection of values at the area scale. Today, local authorities play an important role in urban historic site rehabilitation and in projects for the re-construction of traditional houses in Ankara, Turkey. Many conservation acts have taken place since the 1980s. A remarkable example of the conservation of traditional Turkish houses is the Hamamönü, Ankara Re-Construction Project, situated in one of the historical districts that has suffered from deterioration and unplanned urban development. In this region, the pre-existing but unused historic fabric of the site has been revised, and based on the results of this case study, the relationship between users and re-construction is discussed. Most of the houses were re-constructed in order to create a new tourist attraction area. This study discusses the socio-cultural relations between the new built environment and its visitors from the point of view of cultural sustainability, and it questions the transmission of cultural stimulations. A case study was conducted to examine visitors' perception of the cultural aspects of the site. The relationship between the original cultural identities and those existing after the re-construction project, as transmitted to the visitors and the users of those spaces, is discussed.
The aim of the study is to analyse the relation between the users and the cultural identities that the re-construction project has sought to protect. The purposes of this study are to evaluate the implementations of Altındağ Municipality in Hamamönü and to examine socio-cultural sustainability through user responses. After assessing the implementation in terms of socio-cultural sustainability, some proposals for the future of Hamamönü are introduced.

Keywords: social sustainability, cultural sustainability, Hamamönü, Turkey, re-construction

Procedia PDF Downloads 474
1638 Effect of Sulphur Concentration on Microbial Population and Performance of a Methane Biofilter

Authors: Sonya Barzgar, J. Patrick, A. Hettiaratchi

Abstract:

Methane (CH4) is regarded as the second-largest contributor to the greenhouse effect, with a global warming potential (GWP) of 34 relative to carbon dioxide (CO2) over the 100-year horizon, so there is growing interest in reducing emissions of this gas. Methane biofiltration (MBF) is a cost-effective technology for reducing low-volume point-source emissions of methane. In this technique, microbial oxidation of methane is carried out by methane-oxidizing bacteria (methanotrophs), which use methane as a carbon and energy source. MBF uses a granular medium, such as soil or compost, to support the growth of the methanotrophic bacteria responsible for converting methane to carbon dioxide (CO2) and water (H2O). Even though the biofiltration technique has been shown to be an efficient, practical and viable technology, the design and operational parameters, as well as the relevant microbial processes, have not been investigated in depth. In particular, limited research has been done on the effects of sulphur on methane bio-oxidation. Since bacteria require a variety of nutrients for growth, to improve the performance of methane biofiltration it is important to establish the input quantities of nutrients to be provided to the biofilter to ensure that nutrients are available to sustain the process. The study described in this paper was conducted with the aim of determining the influence of sulphur on methane elimination in a biofilter. A set of experimental measurements was carried out to explore how the conversion of elemental sulphur could affect methane oxidation in terms of methanotroph growth and system pH. Batch experiments with different concentrations of sulphur were performed while keeping the other parameters, i.e., moisture content, methane concentration, oxygen level and compost, at their optimum levels.
The study revealed the tolerable limit of sulphur without any interference with methane oxidation, as well as the particular sulphur concentration leading to the greatest methane elimination capacity. Due to sulphur oxidation, the pH varies transiently, which affects microbial growth behaviour. Methanotrophs are incapable of growth at pH values below 5.0 and are thus apparently unable to oxidize methane under such conditions. Herein, the pH at which methanotrophic bacteria grow optimally is identified. Finally, monitoring of methane concentration over time in the presence of sulphur is also presented for laboratory-scale biofilters.
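For context, the climate benefit of oxidizing methane in a biofilter can be estimated from the stoichiometry of CH4 → CO2 and the GWP of 34 cited above; the sketch below is a back-of-envelope calculation under those assumptions, not part of the study's measurements.

```python
# CO2-equivalent benefit of oxidizing methane in a biofilter.
# Assumes GWP-100 of 34 for CH4 (as cited in the abstract) and complete
# conversion: CH4 + 2 O2 -> CO2 + 2 H2O.

M_CH4, M_CO2 = 16.04, 44.01   # molar masses, g/mol
GWP_CH4 = 34.0                # 100-year GWP relative to CO2

def net_co2e_benefit(tonnes_ch4_oxidized):
    """Net CO2-equivalent avoided per tonne of CH4 oxidized to CO2."""
    co2_released = tonnes_ch4_oxidized * (M_CO2 / M_CH4)  # stoichiometric CO2
    co2e_avoided = tonnes_ch4_oxidized * GWP_CH4          # CH4 not emitted
    return co2e_avoided - co2_released

print(round(net_co2e_benefit(1.0), 2))  # tonnes CO2-eq per tonne CH4
```

Each tonne of methane oxidized still releases about 2.74 tonnes of CO2, but avoids 34 tonnes of CO2-equivalent, so the net benefit is roughly 31 tonnes CO2-eq per tonne of CH4.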

Keywords: global warming, methane biofiltration (MBF), methane oxidation, methanotrophs, pH, sulphur

Procedia PDF Downloads 231
1637 Conservation Challenges of Fish and Fisheries in Lake Tana, Ethiopia

Authors: Shewit Kidane, Abebe Getahun, Wassie Anteneh, Admassu Demeke, Peter Goethals

Abstract:

We have reviewed the major findings of scientific studies on Lake Tana's fish resources and the threats to them. The aim was to provide summarised information for all concerned bodies and international readers, giving a full and comprehensive picture of the lake's fish resources and conservation problems. The Lake Tana watershed comprises 28 fish species, of which 21 are endemic. Moreover, Lake Tana is among the top 250 lake regions of global importance for biodiversity and is a world-recognised wintering site for migratory birds. Lake Tana, together with its adjacent wetlands, directly and indirectly provides a livelihood for more than 500,000 people. However, owing to anthropogenic activities, the lake ecosystem, as well as the fish and the attributes of the fisheries sector, are severely degraded. Fish species in Lake Tana are suffering from illegal fishing, damming, habitat and breeding-ground degradation, wastewater disposal, the introduction of exotic species, and the failure to implement fisheries regulations. Currently, more than 98% of fishers in Lake Tana are using highly destructive monofilament nets. Moreover, dams, irrigation schemes and hydropower plants are constructed in response to emerging development needs only. Mitigation techniques, such as the construction of fish ladders for migratory fishes, are largely forgotten. In addition, water resource developers are likely unaware of both the importance of the fisheries and the impact of dam construction on fish. As a result, the biodiversity issue is often missed. Besides, the Lake Tana wetlands, which play a vital role in sustaining biodiversity, are not wisely utilised in the sense of the Ramsar Convention's definition. Wetlands are considered unhealthy, and hence wetland conversion for recession agriculture is still seen as an advanced mode of development.
As a result, many wetlands in the lake watershed are shrinking drastically over time, and Cyperus papyrus, one of the characteristic features of Lake Tana, has dramatically declined in its distribution, with some local extinctions. Furthermore, the recently introduced water hyacinth (Eichhornia crassipes) is creating immense problems for the lake ecosystem. Currently, 1.56 million tons of sediment are deposited into the lake each year, and wastes from industries and residents are discharged directly into the lake without treatment. Recently, signs of eutrophication have been revealed in Lake Tana; most notably, the incidence of the cyanobacterial genus Microcystis was reported from the Bahir Dar Gulf of Lake Tana. The direct dependency of the communities on the lake water for drinking, for washing their bodies and clothes, and for its fisheries makes the problem worse. Since the lake is home to many endemic migratory fish, such unregulated developmental activities could be detrimental to their stocks. This is best illustrated by the drastic stock reduction (>75% in biomass) of the world-unique Labeobarbus species. Unless proper management is put in place, these anthropogenic impacts can jeopardize the aquatic ecosystems. Therefore, in order to use the aquatic resources sustainably and fulfil the needs of the local people, every developmental activity and resource utilisation should be carried out in adherence to the available policies.

Keywords: anthropogenic impacts, dams, endemic fish, wetland degradation

Procedia PDF Downloads 241
1636 Links between Moral Distress of Registered Nurses and Factors Related to Patient Care at the End of Their Life: A Cross Sectional Survey

Authors: L. Laurs, A. Blazeviciene, D. Milonas

Abstract:

Introduction: Nursing as a profession is grounded in moral obligation. Nursing practice is grounded in ethical standards: to do no harm, to promote justice, to be accountable, and to provide safe and competent care. The nature of the nurse-patient therapeutic relationship requires acting on the patient's behalf. Moral distress consists of negative stress symptoms that occur in ethically charged situations that the nurse perceives as discordant with their professional values. Aim of the Study: The purpose of this study was to assess links between the moral distress of registered nurses and factors related to the care of patients at the end of their life. Methods and Sample: A descriptive, cross-sectional, correlational design was applied in this study. Registered nurses were recruited from seven municipal multi-profile hospitals providing both general and specialized healthcare services in Lithuania (N = 1055). The research instruments included two questionnaires: Obstacles and Facilitators in End-of-Life Care and the Moral Distress Scale (revised). Results: Spearman's correlation analysis was performed to assess the relationship between nurses' attitudes towards patient care at the end of life and the moral distress they experienced. Statistically significant correlations between moral distress and the following factors related to patient end-of-life care were identified: conversations with physicians on patients' end-of-life problems had a positive impact on job satisfaction, while situations in which patients were excluded from decisions about their treatment and nursing because their ability to assess the situation was questioned increased moral distress. The views that patient consciousness should not be permanently suppressed by calming medications and that the patient should be provided with all nursing care services were also related to moral distress.
Conclusions: The moral distress of nurses is significantly related to end-of-life patient care and its determinants: moral distress increased with the lack of discussion with doctors about problem-solving and with the exclusion of patients from decision-making, and it diminished when calming medications were not used to permanently suppress a patient's consciousness and when good care was provided to patients.
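The Spearman analysis reported above amounts to a Pearson correlation computed on ranks (with ties given average ranks); a minimal sketch with hypothetical Likert-style data, not the study's, is:

```python
def ranks(xs):
    """Average ranks (1-based), handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend the tie group
        avg = (i + j) / 2.0 + 1.0       # average rank for the tied group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical 5-point Likert responses vs. moral-distress scores
likert = [1, 2, 2, 3, 4, 5, 5]
distress = [10, 14, 13, 20, 22, 30, 28]
print(round(spearman(likert, distress), 3))
```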

Keywords: moral distress, registered nurses, end of life, care

Procedia PDF Downloads 108
1635 Current Situation and Need in Learning Management for Developing the Analytical Thinking of Teachers in Basic Education of Thailand

Authors: S. Art-in

Abstract:

This was a survey research study. The objective was to examine the current situation and needs in learning management for developing the analytical thinking of teachers in basic education in Thailand. The target group consisted of 400 teachers teaching at the basic education level, selected by multi-stage random sampling. The instrument used in this study was a questionnaire on the current situation and needs in learning management for developing analytical thinking, with a 5-level rating scale. Data were analyzed by calculating frequencies, means, standard deviations and percentages, and by content analysis. The research found that: 1) Regarding the current situation, the teachers provided learning management for developing analytical thinking, overall, at a 'high' level. The issue with the lowest level of practice was the teachers' competency in designing and establishing learning management plans for developing students' analytical thinking. Considering each aspect, it was found that: 1.1) in the teacher aspect, the issue with the lowest level of practice was the teachers' competency in designing and establishing learning management plans for developing students' analytical thinking; and 1.2) in the learning management aspect for developing students' analytical thinking, the issue with the lowest level of practice was that learning activities provided students the opportunity to evaluate their analytical thinking process in each learning session. 2) The teachers expressed their needs in learning management for developing analytical thinking, overall, at 'the highest' level. The issue with the highest level of need was to obtain knowledge and competency in models, techniques, and methods of learning management, or steps of learning management, for developing students' analytical thinking.
Considering each aspect, it was found that: 2.1) in the teacher aspect, the issue with the highest level of need was to obtain knowledge and comprehension of models, techniques, and methods of learning management, or steps of learning management, for developing students' analytical thinking; and 2.2) in the learning management aspect for developing analytical thinking, the issues with the highest level of need consisted of the design of learning activities as problem situations, and the opportunity for students to comprehend the problem situation as well as to practice their analytical thinking in order to find the answer.

Keywords: current situation and need, learning management, analytical thinking, teachers in basic education level, Thailand

Procedia PDF Downloads 349
1634 Predictive Relationship between Motivation Strategies and Musical Creativity of Secondary School Music Students

Authors: Lucy Lugo Mawang

Abstract:

Educational psychologists have highlighted the significance of creativity in education. Likewise, a fundamental objective of music education concerns the development of students' musical creativity potential. The purpose of this study was to determine the relationship between motivation strategies and musical creativity, and to establish the prediction equation for musical creativity. The study used purposive sampling and a census to select 201 fourth-form music students (139 females / 62 males), mainly from public secondary schools in Kenya. The mean age of participants was 17.24 years (SD = .78). Framed upon self-determination theory and the dichotomous model of achievement motivation, the study adopted an ex post facto research design. A self-report measure, the Achievement Goal Questionnaire-Revised (AGQ-R), was used to collect data for the independent variable. Musical creativity was based on a creative music composition task and measured by the Consensual Musical Creativity Assessment Scale (CMCAS). Data were collected in two separate sessions at an interval of one month. The questionnaire was administered in the first session, lasting approximately 20 minutes. The second session was for notation of participants' creative compositions. The results indicated a positive correlation r(199) = .39, p < .01 between musical creativity and intrinsic music motivation. Conversely, a negative correlation r(199) = -.19, p < .01 was observed between musical creativity and extrinsic music motivation. The equation for predicting musical creativity from music motivation strategies was significant, F(2, 198) = 20.8, p < .01, with R² = .17. Motivation strategies accounted for approximately 17% of the variance in participants' musical creativity. Intrinsic music motivation had the highest significant predictive value (β = .38, p < .01) on musical creativity.
In the exploratory analysis, a significant mean difference t(118) = 4.59, p < .01 in musical creativity between intrinsic and extrinsic music motivation was observed, in favour of intrinsically motivated participants. Further, a significant gender difference t(93.47) = 4.31, p < .01 in musical creativity was observed, with male participants scoring higher than females. However, there was no significant difference in participants' musical creativity based on age. The study recommended that music educators strive to enhance intrinsic music motivation among students. Specifically, schools should create conducive environments and provide interventions for the development of intrinsic music motivation, since it is the motivation strategy most facilitative in predicting musical creativity.
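The reported prediction equation comes from ordinary least squares with two motivation predictors; a self-contained sketch on hypothetical motivation and creativity scores (not the study's data) is:

```python
def fit_two_predictor_ols(x1, x2, y):
    """OLS for y = b0 + b1*x1 + b2*x2 via the normal equations (3x3 solve)."""
    n = len(y)
    cols = [[1.0] * n, x1, x2]                     # design matrix columns
    A = [[sum(a * b for a, b in zip(c1, c2)) for c2 in cols] for c1 in cols]
    rhs = [sum(c * yi for c, yi in zip(col, y)) for col in cols]
    # Gaussian elimination with partial pivoting
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        rhs[i], rhs[p] = rhs[p], rhs[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            rhs[r] -= f * rhs[i]
    coef = [0.0] * 3
    for i in range(2, -1, -1):                     # back-substitution
        coef[i] = (rhs[i] - sum(A[i][c] * coef[c]
                                for c in range(i + 1, 3))) / A[i][i]
    return coef

# Hypothetical scores: intrinsic motivation, extrinsic motivation, creativity
intrinsic = [3.2, 4.1, 2.8, 4.5, 3.9, 2.5]
extrinsic = [4.0, 2.9, 4.4, 2.5, 3.1, 4.6]
creativity = [55.0, 68.0, 50.0, 74.0, 65.0, 47.0]

b0, b1, b2 = fit_two_predictor_ols(intrinsic, extrinsic, creativity)
pred = b0 + b1 * 4.0 + b2 * 3.0  # predicted creativity for a new student
```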

Keywords: extrinsic music motivation, intrinsic music motivation, musical creativity, music composition

Procedia PDF Downloads 150
1633 The Effects of Lighting Environments on the Perception and Psychology of Consumers of Different Genders in a 3C Retail Store

Authors: Yu-Fong Lin

Abstract:

The main purpose of this study is to explore the impact of different lighting arrangements that create different visual environments in a 3C retail store on the perception, psychology, and shopping tendencies of consumers of different genders. In recent years, the ‘emotional shopping’ model has been widely accepted in the consumer market; in addition to the emotional meaning and value of a product, the in-store ‘shopping atmosphere’ has also been increasingly regarded as significant. The lighting serves as an important environmental stimulus that influences the atmosphere of a store. Altering the lighting can change the color, the shape, and the atmosphere of a space. A successful retail lighting design can not only attract consumers’ attention and generate their interest in various goods, but it can also affect consumers’ shopping approach, behavior, and desires. 3C electronic products have become mainstream in the current consumer market. Consumers of different genders may demonstrate different behaviors and preferences within a 3C store environment. This study tests the impact of a combination of lighting contrasts and color temperatures in a 3C retail store on the visual perception and psychological reactions of consumers of different genders. The research design employs an experimental method to collect data from subjects and then uses statistical analysis adhering to a 2 x 2 x 2 factorial design to identify the influences of different lighting environments. This study utilizes virtual reality technology as the primary method by which to create four virtual store lighting environments. The four lighting conditions are as follows: high contrast/cool tone, high contrast/warm tone, low contrast/cool tone, and low contrast/warm tone. Differences in the virtual lighting and the environment are used to test subjects’ visual perceptions, emotional reactions, store satisfaction, approach-avoidance intentions, and spatial atmosphere preferences. 
The findings of our preliminary test indicate that female subjects have a higher pleasure response than male subjects in a 3C retail store. Based on these findings, the researchers modified the contents of the questionnaires and the virtual 3C retail environments with different lighting conditions in order to conduct the final experiment. The results will provide information about the effects of retail lighting on the environmental psychology and psychological reactions of consumers of different genders in a 3C retail store lighting environment. These results will also help establish useful practical guidelines on creating 3C retail store lighting and atmosphere for retailers and interior designers.

Keywords: 3C retail store, environmental stimuli, lighting, virtual reality

Procedia PDF Downloads 388
1632 Distinguishing between Bacterial and Viral Infections Based on Peripheral Human Blood Tests Using Infrared Microscopy and Multivariate Analysis

Authors: H. Agbaria, A. Salman, M. Huleihel, G. Beck, D. H. Rich, S. Mordechai, J. Kapelushnik

Abstract:

Viral and bacterial infections are responsible for a variety of diseases. These infections have similar symptoms, such as fever, sneezing, inflammation, vomiting, diarrhea and fatigue. Thus, physicians may encounter difficulties in distinguishing between viral and bacterial infections on the basis of these symptoms. Bacterial infections differ from viral infections in many other important respects, regarding the response to various medications and the structure of the organisms. In many cases, it is difficult to know the origin of the infection. When necessary, the physician orders a blood test, urine test, or tissue culture to diagnose the infection type. Using these methods, the time that elapses between the receipt of patient material and the presentation of the test results to the clinician is typically too long (> 24 hours). This time is crucial in many cases for saving the life of the patient and for planning the right medical treatment. Thus, rapid identification of bacterial and viral infections in the lab is of great importance for effective treatment, especially in cases of emergency. Blood was collected from 50 patients with confirmed viral infection and 50 with confirmed bacterial infection. White blood cells (WBCs) and plasma were isolated, deposited on a zinc selenide slide, dried, and measured under a Fourier transform infrared (FTIR) microscope to obtain their infrared absorption spectra. The acquired spectra of WBCs and plasma were analyzed in order to differentiate between the two types of infection. In this study, the potential of FTIR microscopy in tandem with multivariate analysis was evaluated for identifying the agent causing a human infection. The method was used to identify the infectious agent type as either bacterial or viral, based on an analysis of the blood components [i.e., white blood cells (WBCs) and plasma] using their infrared vibrational spectra.
The time required for the analysis and evaluation after obtaining the blood sample was less than one hour. In the analysis, minute spectral differences in several bands of the FTIR spectra of WBCs were observed between the groups of samples with viral and bacterial infections. By employing feature extraction with linear discriminant analysis (LDA), a sensitivity of ~92% and a specificity of ~86% for infection-type diagnosis were achieved. This preliminary study suggests that FTIR spectroscopy of WBCs is a potentially feasible and efficient tool for diagnosing the infection type.
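Sensitivity and specificity as reported here follow directly from a classifier's confusion matrix; the sketch below uses hypothetical held-out predictions chosen to reproduce similar rates, not the study's spectra or LDA model.

```python
def sensitivity_specificity(y_true, y_pred, positive="bacterial"):
    """Sensitivity and specificity of a binary infection-type classifier."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical predictions on a held-out set (labels only, for illustration)
truth = ["bacterial"] * 25 + ["viral"] * 50
preds = (["bacterial"] * 23 + ["viral"] * 2       # 23/25 bacterial detected
         + ["viral"] * 43 + ["bacterial"] * 7)    # 43/50 viral correct

sens, spec = sensitivity_specificity(truth, preds)
print(round(sens, 2), round(spec, 2))
```

With these made-up counts, sensitivity is 23/25 = 0.92 and specificity is 43/50 = 0.86, mirroring the magnitudes in the abstract.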

Keywords: viral infection, bacterial infection, linear discriminant analysis, plasma, white blood cells, infrared spectroscopy

Procedia PDF Downloads 220
1631 Investigation on Single Nucleotide Polymorphism in Candidate Genes and Their Association with Occurrence of Mycobacterium avium Subspecies Paratuberculosis Infection in Cattle

Authors: Ran Vir Singh, Anuj Chauhan, Subhodh Kumar, Rajesh Rathore, Satish Kumar, B Gopi, Sushil Kumar, Tarun Kumar, Ramji Yadav, Donna Phangchopi, Shoor Vir Singh

Abstract:

Paratuberculosis, caused by Mycobacterium avium subspecies paratuberculosis (MAP), is a chronic granulomatous enteritis affecting ruminants. It is responsible for significant economic losses in the livestock industry worldwide. The organism is also of public health concern due to an unconfirmed link to Crohn’s disease. Susceptibility to paratuberculosis has been suggested to have a genetic component with low to moderate heritability. A number of SNPs in various candidate genes have been observed to affect susceptibility to paratuberculosis. The objective of this study was to explore the association of various SNPs in candidate genes and a QTL region with MAP. A total of 117 SNPs from SLC11A1, IFNG, CARD15, TLR2, TLR4, CLEC7A, CD209, SP110, ANKRA2, PGLYRP1 and one QTL were selected for study. A total of 1222 cattle from various organized herds, gaushalas and farmer herds were screened for MAP infection by Johnin intradermal skin test, AGID, serum ELISA, fecal microscopy, fecal culture and IS900 blood PCR. Based on the results of these tests, a case and control population of 200 and 183 animals, respectively, was established. The 117 SNPs were validated/tested in this case and control population by the PCR-RFLP technique, and data were analyzed using SAS 9.3 software. Statistical analysis revealed that 107 of the 117 SNPs were not significantly associated with occurrence of MAP. Only SNP rs55617172 of TLR2, rs8193046 and rs8193060 of TLR4, rs110353594 and rs41654445 of CLEC7A, rs208814257 of CD209, rs41933863 of ANKRA2, two loci {SLC11A1 (53C/G)} and {IFNG (185 G/r)}, and SNP rs41945014 in the QTL region were significantly associated with MAP. Six of the 10 significant SNPs, viz. rs110353594 and rs41654445 from CLEC7A, rs8193046 and rs8193060 from TLR4, rs109453173 from SLC11A1 and rs208814257 from CD209, were validated in a new case and control population.
Of these, only one SNP, rs8193046 of the TLR4 gene, was found significantly associated with occurrence of MAP in cattle. The odds ratio indicates that animals with the AG genotype were more susceptible to MAP, a finding in accordance with an earlier report. Hence it reaffirms that the AG genotype can serve as a reliable genetic marker for identifying more susceptible cattle in future selection against MAP infection.
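The odds ratio behind a genotype association like the one reported for rs8193046 comes from a 2×2 case-control table. The sketch below uses hypothetical counts (not the study's data) purely to illustrate the calculation.

```python
# Sketch: odds ratio for carrying a genotype (e.g. AG) in a case-control
# design. All counts below are hypothetical, for illustration only.

def odds_ratio(case_exposed, case_unexposed, control_exposed, control_unexposed):
    """OR = (a/b) / (c/d) from a 2x2 case-control table."""
    return (case_exposed / case_unexposed) / (control_exposed / control_unexposed)

# Hypothetical counts: genotype carriers vs. non-carriers in cases/controls.
or_ag = odds_ratio(120, 80, 70, 113)
print(round(or_ag, 2))  # 2.42 -> carriers have higher odds of infection
```

An OR above 1 indicates the genotype is over-represented among cases, i.e. associated with higher susceptibility.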

Keywords: SNP, candidate genes, paratuberculosis, cattle

Procedia PDF Downloads 351
1629 Periplasmic Expression of Anti-RoxP Antibody Fragments in Escherichia coli

Authors: Caspar S. Carson, Gabriel W. Prather, Nicholas E. Wong, Jeffery R. Anton, William H. McCoy

Abstract:

Cutibacterium acnes is a commensal bacterium found on human skin that has been linked to acne. C. acnes can also be an opportunistic pathogen when it infiltrates the body during surgery. This pathogen can cause dangerous infections of medical implants, such as shoulder replacements, leading to life-threatening blood infections. Compounding this issue, C. acnes resistance to many antibiotics has become an increasing problem worldwide, creating a need for special forms of treatment. C. acnes expresses the protein RoxP, and it requires this protein to colonize human skin. Though this protein is required for C. acnes skin colonization, its function is not yet understood. Inhibition of RoxP function might be an effective treatment for C. acnes infections. To develop such reagents, the McCoy Laboratory generated four unique anti-RoxP antibodies. Preliminary studies in the McCoy Lab have established that each antibody binds a distinct site on RoxP. To assess the potential of these antibodies as therapeutics, it is necessary to specifically characterize these antibody epitopes and evaluate them in assays that assess their ability to inhibit RoxP-dependent C. acnes growth. To provide material for these studies, an antibody expression construct, Fv-clasp(v2), was adapted to encode anti-RoxP antibody sequences. The author hypothesizes that this expression strategy can produce sufficient amounts of >95% pure antibody fragments for further characterization of these antibodies. Four anti-RoxP Fv-clasp(v2) expression constructs (pET vector-based) were transformed into E. coli BL21-Gold(DE3) cells and a small-scale expression and purification trial was performed for each construct to evaluate anti-RoxP Fv-clasp(v2) yield and purity. Successful expression and purification of these antibody constructs will allow for their use in structural studies, such as protein crystallography and cryogenic electron microscopy. 
Such studies would help to define the antibody binding sites on RoxP, which could then be leveraged in the development of certain methods to treat C. acnes infection through RoxP inhibition.

Keywords: structural biology, protein expression, infectious disease, antibody, therapeutics, E. coli

Procedia PDF Downloads 53
1629 Cosmetic Value of Collatamp in Breast Conserving Surgery

Authors: Chee Young Kim, Tae Hyun Kim, Anbok Lee, Hyun-Ah Kim, Woosung Lim, Ku Sang Kim, Jinsun Lee, Yoo Seok Kim, Beom Seok Ko

Abstract:

Background: Collatamp™ is a gentamicin-containing collagen sponge well known for its hemostatic effect and commonly used in surgery. We inserted Collatamp™ wrapped in Surgicel™ (an oxidized cellulose polymer) to fill the defect after breast conserving surgery. The purpose of this study is to evaluate the cosmetic value of Collatamp™ in breast conserving surgery for breast cancer patients. Methods: 17 patients who underwent breast conserving surgery with insertion of Collatamp™ wrapped in Surgicel™ at Inje University Busan Paik Hospital from October 2015 to September 2016 were enrolled in this study. Patient satisfaction and cosmetic outcome at 6 months after the operation were analyzed to assess the effectiveness and usefulness of Collatamp™ for cosmesis. Patient satisfaction was investigated through interviews on a scale of good, fair, or poor, and the cosmetic outcome was assessed through physical examination by a surgeon who did not participate in the operations. Results: Among the 17 patients, nine rated their satisfaction 'good', eight 'fair', and none 'poor'. The cosmetic outcome assessment yielded 11 'good', six 'fair', and no 'poor' ratings. In the 'good' patient-satisfaction group, the mean resection-to-breast volume ratio was 16%, compared to 24% in the 'fair' group; the mean actual resection volumes were 100.6 cm³ and 102.7 cm³, respectively. In the 'good' cosmetic-outcome group, the mean resection-to-breast volume ratio was 18%, compared to 23% in the 'fair' group; the mean actual resection volumes were 99.2 cm³ and 105.9 cm³, respectively. According to these results, patient satisfaction and cosmetic outcome after surgery depended more on the resection-to-breast volume ratio than on the actual resection volume. There were eight cases of postoperative complications: one lymphedema, one seroma, and six patients with mild pain.
Conclusions: The cosmetic effect of Collatamp™ in breast conserving surgery depended more on the resection-to-breast volume ratio than on the actual resection volume. In this short-term survey, patients tended to be satisfied with the cosmetic result, all giving either good or fair scores. However, long-term outcomes should be further assessed.

Keywords: breast cancer, breast conserving surgery, collatamp, cosmetics

Procedia PDF Downloads 251
1628 The Effects of Leadership on the Claim of Responsibility

Authors: Katalin Kovacs

Abstract:

In most forms of violence the perpetrators intend to hide their identities. Terrorism is different: terrorist groups often take responsibility for their attacks and consequently reveal their identities. This unique characteristic of terrorism has been largely overlooked, and scholars are still puzzled as to why terrorist groups claim responsibility for their attacks. Certainly, the claim of responsibility is worth analysing. It would help to build a clearer picture of what terrorist groups try to achieve and how, and to develop an understanding of the strategic planning of terrorist attacks and the message the terrorists intend to deliver. The research aims to answer the question of why terrorist groups claim responsibility for some of their attacks and not for others. To do so, the claim of responsibility is treated as a tactical choice, based on the assumption that terrorists weigh the costs and benefits of claiming responsibility. The main argument is that terrorist groups do not claim responsibility when no tactical advantage is gained from doing so. The idea that the claim of responsibility has tactical value offers the opportunity to test these assertions using a large-scale empirical analysis. The claim of responsibility as a tactical choice depends on other tactical choices, such as the choice of target, the internationality of the attack, the number of victims, and whether the group occupies territory or operates underground. The structure of the terrorist group and the level of decision making also affect the claim of responsibility. Terrorists at lower levels are less disciplined than the leaders: they pay less attention to the strategic objectives and engage more easily in indiscriminate violence, and consequently are less likely to claim responsibility.
Therefore, the research argues that terrorists at the highest level of decision making are the ones who take the strategic objectives into account and would claim responsibility for the attacks. Because most studies on terrorism fail to provide definitions, the research is fragmented and incomparable; separate, isolated studies do not support comprehensive thinking. It is also important to note that only a few studies use quantitative methods. The aim of this research is to develop a new and comprehensive overview of the claim of responsibility based on strong quantitative evidence. By using well-established definitions and operationalisation, the current research focuses on a broad range of attributes that can have tactical value in order to determine the circumstances under which terrorists are more likely to claim responsibility.

Keywords: claim of responsibility, leadership, tactical choice, terrorist group

Procedia PDF Downloads 308
1627 Participatory Cartography for Disaster Reduction in Progreso, Yucatan, Mexico

Authors: Gustavo Cruz-Bello

Abstract:

Progreso is a coastal community in Yucatan, Mexico, highly exposed to floods produced by severe storms and tropical cyclones. A participatory cartography approach was conducted to help reduce flood disasters and assess social vulnerability within the community. The first step was to engage local authorities in risk management to facilitate the process. Two workshops were conducted. In the first, a poster-size printed high-spatial-resolution satellite image of the town was used to gather information from the participants: eight women and seven men, among them construction workers, students, government employees and fishermen, aged between 23 and 58 years. As a first task, participants were asked to locate emblematic places on the image to familiarize themselves with it. Then they were asked to locate areas that get flooded and the buildings they use as refuges, to list actions they usually take to reduce vulnerability, and to collectively come up with others that might reduce disasters. The spatial information generated at the workshops was digitized and integrated into a GIS environment. A printed version of the map was reviewed by local risk-management experts, who validated the feasibility of the proposed actions. In the second workshop, we returned the information to the community for feedback. Additionally, a survey was applied in one household per block to obtain socioeconomic, prevention and adaptation data. The information generated from the workshops was contrasted, through t and chi-squared tests, with the survey data in order to test the hypothesis that poorer or less educated people are less prepared to face floods (more vulnerable) and live near or among areas with a higher presence of floods. Results showed that a great majority of people in the community are aware of the hazard and are prepared to face it.
However, there was no consistent relationship between regularly flooded areas and people’s average years of education, house services, or house modifications against heavy rains. We could say that the participatory cartography intervention made participants aware of their vulnerability and made them collectively reflect on actions that can reduce disasters produced by floods. They also considered that the final map could be used as a communication and negotiation instrument with NGOs and government authorities. It was not found that poorer and less educated people are located in areas with a higher presence of floods.
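A chi-squared test of independence like the one used to contrast the survey variables can be sketched as follows. The contingency table here is hypothetical (education level vs. living in a regularly flooded area), chosen only to illustrate the mechanics; the statistic is compared against the 5% critical value for 1 degree of freedom.

```python
# Sketch: Pearson chi-squared test of independence on an r x c table.
# The table below is hypothetical, not the Progreso survey data.

def chi_squared(table):
    """Pearson chi-squared statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / grand  # expected count
            stat += (obs - exp) ** 2 / exp
    return stat

# Rows: low/high education; columns: in/out of a regularly flooded area.
table = [[30, 20], [25, 25]]
stat = chi_squared(table)
# Compare against the 5% critical value for 1 degree of freedom (3.841);
# here the statistic is ~1.01, i.e. no significant association.
print(stat, stat > 3.841)
```

A statistic below the critical value is consistent with the paper's finding of no relationship between education and flood exposure.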

Keywords: climate change, floods, Mexico, participatory mapping, social vulnerability

Procedia PDF Downloads 111
1626 Anatomical and Histochemical Investigation of the Leaf of Vitex agnus-castus L.

Authors: S. Mamoucha, J. Rahul, N. Christodoulakis

Abstract:

Introduction: Nature has been the source of medicinal agents since the dawn of human existence on Earth. Currently, millions of people in the developing world rely on medicinal plants for primary health care, income generation and lifespan improvement. In Greece, more than 5500 plant taxa are reported, about 250 of which are considered of great pharmaceutical importance. Among the plants used for medical purposes, Vitex agnus-castus L. (Verbenaceae) has been known since ancient times. It is a small tree or shrub, widely distributed from the Mediterranean basin up to Central Asia, also known as chaste tree or monk's pepper. Theophrastus mentioned the shrub several times, as ‘agnos’, in his ‘Enquiry into Plants’. Dioscorides mentioned the use of V. agnus-castus for the stimulation of lactation in nursing mothers and the treatment of several female disorders. The plant has important medicinal properties and a long tradition in folk medicine as an antimicrobial, diuretic, digestive and insecticidal agent. Materials and methods: Leaves were cleaned, detached, fixed, sectioned and investigated with light and scanning electron microscopy (SEM). Histochemical tests were performed as well: specific histochemical reagents (osmium tetroxide, H2SO4, vanillin/HCl, antimony trichloride, Wagner’s reagent, Dittmar’s reagent, potassium bichromate, nitroso reaction, ferric chloride and dimethoxybenzaldehyde) were used for the subcellular localization of secondary metabolites. Results: Light microscopy of the elongated leaves of V. agnus-castus revealed three layers of palisade parenchyma just below the single-layered adaxial epidermis. The spongy parenchyma is rather loose. Adaxial epidermal cells are larger than those of the abaxial epidermis. Four different types of capitate secreting trichomes were localized among the abaxial epidermal cells. Stomata were observed at the abaxial epidermis as well.
SEM revealed the interesting arrangement of trichomes. Histochemical treatment on fresh and plastic embedded tissue sections revealed the nature and the sites of secondary metabolites accumulation (flavonoids, steroids, terpenes). Acknowledgment: This work was supported by IKY - State Scholarship Foundation, Athens, Greece.

Keywords: Vitex agnus-castus, leaf anatomy, histochemical reagents, secondary metabolites

Procedia PDF Downloads 380
1625 Efficacy of Transcranial Magnetic Therapy on Balance in Patients with Vestibular Dysfunction

Authors: Ibrahim M. I. Hamoda, Ahmed R. Z. Baghdadi, Mohammed K. Mohamed, Nawal A. Abu-Shady

Abstract:

Background: Most patients with vestibular dysfunction suffer from balance disorders. Abnormality in balance increases effort and exertion, which affects independence, so this study might serve as a guide in managing balance problems and consequently improve walking with less exertion and maximum function. Purpose: to analyze and discuss the effect of transcranial magnetic therapy on balance in patients with vestibular dysfunction. Methods: forty subjects of both sexes were randomly divided into two equal groups. Group I (study group) received transcranial magnetic therapy together with a selected physical therapy program for improving balance and vestibular disorders (balance training, Cawthorne-Cooksey exercises); group II (control group) received the same selected physical therapy program without transcranial magnetic therapy. This treatment procedure was applied three times weekly for three months. The mean age was 54.53±3.44 and 55.33±2.32 years and BMI 35.7±3.03 and 35.73±1.03 kg/m² for groups I and II, respectively. The Biodex Balance System, Berg Balance Scale (BBS) and brain MRI were used for assessment; assessments were conducted before and after treatment. The treatment program for group I included balance training, Cawthorne-Cooksey exercises and pulsed magnetic therapy (20 minutes per session, intensity 2 gauss, frequency 1 Hz), performed in approximately one hour every other day for three months. Group II received the same program without transcranial magnetic therapy. Results: One-way ANOVA revealed no significant differences in BBS scores, overall balance index, anterior/posterior balance index, medial/lateral balance index and dynamic limits of stability between the two groups.
Moreover, the BBS scores increased, and the overall balance index, anterior/posterior balance index, medial/lateral balance index and dynamic limits of stability decreased significantly after treatment in both groups I and II compared with before treatment. Interpretation/Conclusion: Adding pulsed magnetic therapy to balance training and Cawthorne-Cooksey exercises has no effect on static and dynamic balance in patients with balance problems due to benign paroxysmal positional vertigo.
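A minimal sketch of the one-way ANOVA used for the between-group comparison: the F statistic is the ratio of between-group to within-group variance. The BBS scores below are hypothetical, and the small F value illustrates the no-significant-difference case reported above.

```python
# Sketch: F statistic of a one-way ANOVA for independent groups.
# The score lists below are hypothetical, not the trial's data.

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA across independent groups."""
    k = len(groups)                                # number of groups
    n = sum(len(g) for g in groups)                # total sample size
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical post-treatment BBS scores for the two groups.
f = one_way_anova_f([50, 52, 51, 49], [51, 50, 52, 50])
print(f)  # small F -> group means are very close (no significant difference)
```

With an F this far below the relevant critical value, the null hypothesis of equal group means would not be rejected.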

Keywords: balance, transcranial magnetic therapy, vestibular dysfunction, biomechanic

Procedia PDF Downloads 477
1624 Semi-Empirical Modeling of Heat Inactivation of Enterococci and Clostridia During the Hygienisation in Anaerobic Digestion Process

Authors: Jihane Saad, Thomas Lendormi, Caroline Le Marechal, Anne-Marie Pourcher, Céline Druilhe, Jean-Louis Lanoiselle

Abstract:

Agricultural anaerobic digestion consists of the conversion of animal slurry and manure into biogas and digestate. According to European regulation (EC n°1069/2009 & EU n°142/2011), these materials must be treated at 70 ºC for 60 min before anaerobic digestion. The impact of such heat treatment on the fate of bacteria has been poorly studied up to now. Moreover, a recent study¹ has shown that enterococci and clostridia are still detected despite the application of such thermal treatment, questioning the relevance of this approach for the hygienisation of digestate. The aim of this study is to establish the heat inactivation kinetics of two species of enterococci (Enterococcus faecalis and Enterococcus faecium) and two species of clostridia (Clostridioides difficile and Clostridium novyi, as a non-toxic model for Clostridium botulinum of group III). A pure culture of each strain was prepared in a specific sterile medium at a concentration of 10⁴–10⁷ MPN/mL (most probable number), depending on the bacterial species. Bacterial suspensions were then filled into sterilized capillary tubes and placed in a water or oil bath at the desired temperature for a specific period of time. Each bacterial suspension was enumerated using an MPN approach, and tests were repeated three times for each temperature/time couple. The inactivation kinetics of the four indicator bacteria is described using the Weibull model and the classical Bigelow model of first-order kinetics. The Weibull model takes biological variation with respect to thermal inactivation into account and is basically a statistical model of the distribution of inactivation times; the classical first-order approach is a special case of the Weibull model. The heat treatment at 70 ºC / 60 min achieves a reduction greater than 5 log10 for E. faecium and E. faecalis. However, it results in a reduction of only about 0.7 log10 for C. difficile and an increase of 0.5 log10 for C. novyi.
Treatments at higher temperatures are required to reach a reduction greater than or equal to 3 log10 for C. novyi (such as 30 min / 100 ºC, 13 min / 105 ºC, 3 min / 110 ºC, and 1 min / 115 ºC), raising the question of the relevance of the 70 ºC / 60 min heat treatment for these spore-forming bacteria. To conclude, the heat treatment (70 ºC / 60 min) defined by the European regulation is sufficient to inactivate non-sporulating bacteria, but higher temperatures (> 100 ºC) are required for spore-forming bacteria to reach a 3 log10 reduction (sporicidal activity).
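The Weibull inactivation model can be sketched as follows: the log10 reduction after time t is (t/δ)^p, where δ is the time for the first decimal reduction and p is a shape parameter. The parameter values below are hypothetical, and setting p = 1 recovers the classical first-order (Bigelow) case mentioned above.

```python
# Sketch: Weibull model of thermal inactivation.
# log10(N0 / Nt) = (t / delta) ** p
# delta (min): time for the first log10 reduction; p: shape parameter.
# The parameter values used here are hypothetical, for illustration only.

def weibull_log10_reduction(t, delta, p):
    """Log10 reduction predicted by the Weibull inactivation model."""
    return (t / delta) ** p

# With p = 1 the model reduces to classical first-order (Bigelow) kinetics:
# e.g. delta = 10 min at the treatment temperature, held for 60 min.
red = weibull_log10_reduction(60, 10, 1.0)
print(red)  # 6.0 log10 reduction, i.e. well beyond the 5-log10 criterion
```

A p < 1 (concave-up survival curve, tailing) or p > 1 (shouldering) lets the same equation fit the non-log-linear curves that spore-formers often show.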

Keywords: heat treatment, enterococci, clostridia, inactivation kinetics

Procedia PDF Downloads 104
1623 Development of Taiwanese Sign Language Receptive Skills Test for Deaf Children

Authors: Hsiu Tan Liu, Chun Jung Liu

Abstract:

Developing a sign language receptive skills test serves multiple purposes: for example, such a test can be an important educational tool and a means of understanding the sign language ability of deaf children. No test for these purposes has been available in Taiwan. Through expert discussion and with reference to the standardized Taiwanese Sign Language Receptive Test for adults and adolescents, the framework of the Taiwanese Sign Language Receptive Skills Test (TSL-RST) for deaf children was developed, and the items were then designed. After multiple rounds of pre-trials, discussion and correction, the TSL-RST was finalized; it can be conducted and scored online. Thirty-three deaf children from all three deaf schools in Taiwan agreed to be tested. Through item analysis, items with a good discrimination index and fair difficulty index were selected. Moreover, psychometric indices of reliability and validity were established, and a regression formula was derived that can predict the sign language receptive skills of deaf children. The main results of this study are as follows. (1) The TSL-RST includes three sub-tests of vocabulary comprehension, syntax comprehension and paragraph comprehension, with 21, 20 and 9 items, respectively. (2) The TSL-RST can be conducted individually online. The sign language ability of deaf students can be calculated quickly and objectively, so that they get feedback and results immediately; this contributes to both teaching and research. Most subjects can complete the test within 25 minutes, and during the test they can answer the questions without relying on their reading ability or memory capacity. (3) The vocabulary comprehension sub-test is the easiest, syntax comprehension is harder, and paragraph comprehension is the hardest.
Each of the three sub-tests, and the test as a whole, show good item discrimination. (4) The psychometric indices are good, including internal consistency reliability (Cronbach’s α coefficient), test-retest reliability, split-half reliability, and content validity. Sign language ability is significantly related to non-verbal IQ, the teachers’ ratings of the students’ sign language ability, and the students’ self-ratings of their own sign language ability. The results showed that higher-grade students performed better than lower-grade students, and students with deaf parents performed better than those with hearing parents. These results give the TSL-RST good discriminant validity. (5) The predictors of sign language ability of primary deaf students are age and years of sign language learning. The results of this study suggest that the TSL-RST can effectively assess deaf students’ sign language ability. This study also proposed a model for developing sign language tests.
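The internal consistency index mentioned above, Cronbach's α, can be computed from item-level scores as sketched below. The item scores are hypothetical, not TSL-RST data.

```python
# Sketch: Cronbach's alpha from item-score rows (one row per item,
# one column per subject). The scores below are hypothetical.

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances)/total variance)."""
    k = len(items)

    def var(xs):
        # Sample variance (n - 1 denominator).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col) for col in zip(*items)]       # per-subject total score
    item_var_sum = sum(var(it) for it in items)
    return k / (k - 1) * (1 - item_var_sum / var(totals))

# Hypothetical scores of 4 subjects on 3 items (1 = correct, 0 = incorrect).
alpha = cronbach_alpha([[1, 1, 0, 0], [1, 1, 1, 0], [1, 0, 1, 0]])
print(round(alpha, 2))  # 0.63
```

Values closer to 1 indicate that the items measure the same underlying ability consistently.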

Keywords: comprehension test, elementary school, sign language, Taiwan sign language

Procedia PDF Downloads 183
1622 Systematic Evaluation of Convolutional Neural Network on Land Cover Classification from Remotely Sensed Images

Authors: Eiman Kattan, Hong Wei

Abstract:

In using a Convolutional Neural Network (CNN) for classification, a set of hyperparameters is available for configuration. This study aims to evaluate the impact of a range of parameters in a CNN architecture, i.e. AlexNet, on land cover classification based on four remotely sensed datasets. The evaluation tests the influence of a set of hyperparameters on classification performance. The parameters concerned are epoch values, batch size, and convolutional filter size against input image size. A set of experiments was conducted to assess the effectiveness of the selected parameters using two implementation approaches, pretrained and fine-tuned. We first explore the number of epochs under several selected batch size values (32, 64, 128 and 200). The impact of the kernel size of convolutional filters (1, 3, 5, 7, 10, 15, 20, 25 and 30) was evaluated against the image sizes under testing (64, 96, 128, 180 and 224), which gave us insight into the relationship between the size of convolutional filters and image size. To generalise the validation, four remote sensing datasets, AID, RSD, UCMerced and RSCCN, which have different land covers and are publicly available, were used in the experiments. These datasets have a wide diversity of input data, such as number of classes, amount of labelled data, and texture patterns. A specifically designed interactive deep learning GPU training platform for image classification (NVIDIA DIGITS) was employed in the experiments; it has shown efficiency in both training and testing. The results show that increasing the number of epochs leads to a higher accuracy rate, as expected; however, the convergence state is highly dataset-dependent. For the batch size evaluation, a larger batch size slightly decreases classification accuracy compared to a small batch size.
For example, selecting 32 as the batch size on the RSCCN dataset achieves an accuracy rate of 90.34% at the 11th epoch, while decreasing the epoch value to one makes the accuracy rate drop to 74%. At the other extreme, setting the batch size to 200 decreases the accuracy rate at the 11th epoch to 86.5%, and to 63% when using one epoch only. On the other hand, the choice of kernel size is only loosely related to the dataset; from a practical point of view, filter size 20 produces 70.4286%. The final experiment, on image size, shows that accuracy improves with image size, although the performance gain is computationally expensive. These conclusions open opportunities toward better classification performance in various applications such as planetary remote sensing.
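The experimental design above sweeps batch sizes, kernel sizes and image sizes per dataset. A sketch of enumerating that grid, using the values quoted in the text (the training runs themselves are not reproduced here):

```python
# Sketch: enumerating the hyperparameter grid described in the text.
# Values are the ones quoted above; the accuracy measurement for each
# configuration would come from an actual training run, omitted here.
import itertools

batch_sizes = [32, 64, 128, 200]
kernel_sizes = [1, 3, 5, 7, 10, 15, 20, 25, 30]
image_sizes = [64, 96, 128, 180, 224]

# Keep only configurations where the filter fits inside the input image
# (all of these do, since the largest kernel is 30 and the smallest image 64).
grid = [(b, k, s) for b, k, s in
        itertools.product(batch_sizes, kernel_sizes, image_sizes) if k < s]
print(len(grid))  # 180 (batch, kernel, image) configurations per dataset
```

Each tuple would then be trained for the chosen number of epochs and its test accuracy recorded, which is how the epoch/batch-size interactions above were observed.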

Keywords: CNNs, hyperparameters, remote sensing, land cover, land use

Procedia PDF Downloads 164
1621 Calculation of Pressure-Varying Langmuir and Brunauer-Emmett-Teller Isotherm Adsorption Parameters

Authors: Trevor C. Brown, David J. Miron

Abstract:

Gas-solid physical adsorption methods are central to the characterization and optimization of the effective surface area, pore size and porosity for applications such as heterogeneous catalysis, and gas separation and storage. Properties such as adsorption uptake, capacity, equilibrium constants and Gibbs free energy are dependent on the composition and structure of both the gas and the adsorbent. However, challenges remain in accurately calculating these properties from experimental data. Gas adsorption experiments involve measuring the amounts of gas adsorbed over a range of pressures under isothermal conditions. Various constant-parameter models, such as the Langmuir and Brunauer-Emmett-Teller (BET) theories, are used to provide information on adsorbate and adsorbent properties from the isotherm data. These models typically do not provide accurate interpretations across the full range of pressures and temperatures. The Langmuir adsorption isotherm is a simple approximation for modelling equilibrium adsorption data and has been effective in estimating surface areas and catalytic rate laws, particularly for high surface area solids. The Langmuir isotherm assumes the systematic filling of identical adsorption sites to a monolayer coverage. The BET model is based on the Langmuir isotherm and allows for the formation of multiple layers; these additional layers do not interact with the first layer, and their energetics are equal to those of the adsorbate as a bulk liquid. The BET method is widely used to measure the specific surface area of materials. Both the Langmuir and BET models assume that the affinity of the gas for all adsorption sites is identical, so the calculated adsorbent uptake at the monolayer and the equilibrium constant are independent of coverage and pressure. Accurate representations of adsorption data have been achieved by extending the Langmuir and BET models to include pressure-varying uptake capacities and equilibrium constants.
These parameters are determined using a novel regression technique called flexible least squares for time-varying linear regression. For isothermal adsorption the adsorption parameters are assumed to vary slowly and smoothly with increasing pressure. The flexible least squares for pressure-varying linear regression (FLS-PVLR) approach assumes two distinct types of discrepancy terms, dynamic and measurement for all parameters in the linear equation used to simulate the data. Dynamic terms account for pressure variation in successive parameter vectors, and measurement terms account for differences between observed and theoretically predicted outcomes via linear regression. The resultant pressure-varying parameters are optimized by minimizing both dynamic and measurement residual squared errors. Validation of this methodology has been achieved by simulating adsorption data for n-butane and isobutane on activated carbon at 298 K, 323 K and 348 K and for nitrogen on mesoporous alumina at 77 K with pressure-varying Langmuir and BET adsorption parameters (equilibrium constants and uptake capacities). This modeling provides information on the adsorbent (accessible surface area and micropore volume), adsorbate (molecular areas and volumes) and thermodynamic (Gibbs free energies) variations of the adsorption sites.
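For reference, the constant-parameter forms of the two isotherms discussed above can be written as small functions. The numeric parameters in the example are hypothetical; in the paper's FLS-PVLR approach, the uptake capacity and equilibrium constant would instead vary smoothly with pressure.

```python
# Sketch: constant-parameter Langmuir and BET isotherms.
# qm: monolayer uptake capacity; K: Langmuir equilibrium constant;
# c: BET constant; p_rel = p / p0 is the relative pressure (0 < p_rel < 1).
# Parameter values in the example are hypothetical.

def langmuir(p, qm, K):
    """Langmuir isotherm: q = qm * K * p / (1 + K * p)."""
    return qm * K * p / (1 + K * p)

def bet(p_rel, qm, c):
    """BET isotherm: q = qm * c * x / ((1 - x) * (1 - x + c * x))."""
    x = p_rel
    return qm * c * x / ((1 - x) * (1 - x + c * x))

# At K * p = 1 the Langmuir uptake is exactly half the monolayer capacity.
q_half = langmuir(1.0, qm=2.0, K=1.0)
print(q_half)  # 1.0
```

The pressure-varying extension replaces the fixed qm and K with per-pressure estimates obtained from the flexible least squares regression, recovering these forms as the limiting constant-parameter case.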

Keywords: Langmuir adsorption isotherm, BET adsorption isotherm, pressure-varying adsorption parameters, adsorbate and adsorbent properties and energetics

Procedia PDF Downloads 226
1620 Modern Methods of Construction (MMC): The Potentials and Challenges of Using Prefabrication Technology for Building Modern Houses in Afghanistan

Authors: Latif Karimi, Yasuhide Mochida

Abstract:

The purpose of this paper is to study Modern Methods of Construction (MMC), specifically prefabrication technology, and to assess the applicability, suitability, and benefits of this construction technique over conventional methods for building new houses in Afghanistan. The construction industry and house-building sector are key contributors to Afghanistan’s economy. However, this sector is challenged by a lack of innovation and by the severe environmental impacts of the huge amount of construction waste from building, demolition and renovation activities. This paper studies prefabrication technology, a popular MMC that is becoming more common, improving in quality and available for a variety of budgets. Several feasibility studies worldwide have revealed that this method is the way forward in improving construction industry performance, as it has been proven to reduce construction time and construction waste and to improve the environmental performance of construction processes. In addition, this study emphasizes ‘sustainability’ in house building, since it is a common challenge in housing construction projects on a global scale. This challenge becomes more severe in under-developed countries like Afghanistan, because most houses are built in the absence of a serious quality control mechanism and dismissive of the basic requirements of sustainable houses: well-being, cost-effectiveness, minimization or prevention of waste production during construction and use, and severe environmental impacts in view of a life cycle assessment. Methodology: a literature review and a study of the conventional practices of building houses in urban areas of Afghanistan. A survey is also being completed to study the potentials and challenges of using prefabrication technology for building modern houses in cities across the country.
A residential housing project is selected for case study to determine the drawbacks of current construction methods vs. prefabrication technique for building a new house. Originality: There are little previous research available about MMC considering its specific impacts on sustainability related to house building practices. This study will be specifically of interest to a broad range of people, including planners, construction managers, builders, and house owners.

Keywords: modern methods of construction (MMC), prefabrication, prefab houses, sustainable construction, modern houses

Procedia PDF Downloads 240
1619 Self-Assembling Layered Double Hydroxide Nanosheets on β-FeOOH Nanorods for Reducing Fire Hazards of Epoxy Resin

Authors: Wei Wang, Yuan Hu

Abstract:

Epoxy resin (EP), one of the most important thermosetting polymers, is widely applied in various fields due to its desirable properties, such as excellent electrical insulation, low shrinkage, outstanding mechanical stiffness, satisfactory adhesion, and solvent resistance. However, like most polymeric materials, EP has fatal drawbacks, including inherent flammability and a high yield of toxic smoke, which restrict its application in fields requiring fire safety. It therefore remains a challenge, and an interesting subject, to develop new flame retardants that not only remarkably improve flame retardancy but also lower the generation of toxic gases from the modified resin. In recent work, polymer nanocomposites based on nanohybrids that contain two or more kinds of nanofillers have drawn intensive interest, since they can realize performance enhancements. Previous hybrids of carbon nanotubes (CNTs) and molybdenum disulfide provide a novel route for decorating layered double hydroxide (LDH) nanosheets on the surface of β-FeOOH nanorods; the deposited LDH nanosheets can fill the network and promote the working efficiency of the β-FeOOH nanorods. Moreover, synergistic effects between LDH and β-FeOOH can be anticipated to reduce the fire hazards of EP composites through a combination of condensed-phase and gas-phase mechanisms. As reported, β-FeOOH nanorods can act as a core for hybrid nanostructures, combining with other nanoparticles through electrostatic attraction in a layer-by-layer assembly technique. In this work, LDH-nanosheet-wrapped β-FeOOH nanorod (LDH-β-FeOOH) hybrids were synthesized by a facile method, with the purpose of combining one-dimensional (1D) and two-dimensional (2D) characteristics to improve the fire resistance of epoxy resin. The hybrids showed good dispersion in the EP matrix with no obvious aggregation.
Thermogravimetric analysis and cone calorimeter tests confirmed that adding LDH-β-FeOOH hybrids to the EP matrix at a loading of 3% could obviously improve the fire safety of the EP composites. The plausible flame-retardancy mechanism was explored by thermogravimetric analysis coupled with infrared spectroscopy (TG-IR) and by X-ray photoelectron spectroscopy, and was attributed to both condensed-phase and gas-phase action: nanofillers migrated to the surface of the matrix during combustion, where they not only shielded the EP matrix from external radiation and heat feedback from the fire zone, but also efficiently retarded the transport of oxygen and flammable pyrolysis products.

Keywords: fire hazards, toxic gases, self-assembly, epoxy

Procedia PDF Downloads 168
1618 Tunable Graphene Metasurface Modeling Using the Method of Moment Combined with Generalised Equivalent Circuit

Authors: Imen Soltani, Takoua Soltani, Taoufik Aguili

Abstract:

Metamaterials cross classical physical boundaries and give rise to new phenomena and applications in beam steering and shaping, where electromagnetic near- and far-field manipulations are achieved in an accurate manner. In this sense, 3D imaging is one of the beneficiaries, in particular Dennis Gabor's invention: holography. The major difficulty here, however, is the lack of a suitable recording medium, so some enhancements were essential, and the 2D version of bulk metamaterials, the so-called metasurface, was introduced. This new class of interfaces simplifies the problem of the recording medium, with the capability of tuning the phase, amplitude, and polarization at a given frequency. In order to achieve intelligible wavefront control, the electromagnetic properties of the metasurface should be optimized by solving Maxwell's equations. In this context, integral methods are emerging as an important approach to studying electromagnetics from microwave to optical frequencies. The method of moments provides an accurate solution that reduces the dimensionality of the problem by writing its boundary conditions in the form of integral equations; however, solving this kind of equation becomes more complicated and time-consuming as the structural complexity increases. Here, the equivalent-circuit method offers the most scalable route to an integral-method formulation. In fact, to ease the resolution of Maxwell's equations, the method of Generalised Equivalent Circuits was proposed to transfer the resolution from the domain of integral equations to the domain of equivalent circuits. This technique consists of creating an electric image of the studied structure using the discontinuity-plane paradigm, taking its environment into account, so that the electromagnetic state of the discontinuity plane is described by generalised test functions, which are modelled by virtual sources that do not store energy.
The environmental effects are included by the use of an impedance or admittance operator. Here, we propose a tunable metasurface composed of graphene-based elements, which combines the advantages of the reflectarray concept with graphene as a pillar constituent element at terahertz frequencies. The metasurface's building block consists of a thin gold film, a SiO₂ dielectric spacer, and a graphene patch antenna. Our electromagnetic analysis is based on the method of moments combined with the generalised equivalent circuit (MoM-GEC). We begin by restricting our attention to the effect of varying graphene's chemical potential on the unit-cell input impedance. It was found that varying the complex conductivity of graphene allows the phase and amplitude of the reflection coefficient to be controlled at each element of the array. From the results obtained here, we were able to determine that phase modulation is realized by adjusting graphene's complex conductivity. This modulation is a viable alternative to tuning the phase by varying the antenna length, because it offers full 2π control of the reflection phase.
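As a rough illustration of the tuning mechanism the abstract describes, the following sketch evaluates only the intraband (Drude-like) term of graphene's Kubo surface conductivity as the chemical potential is swept at a terahertz frequency; the scattering time and temperature used here are illustrative assumptions, not values taken from the paper.

```python
import math

# Physical constants (SI units)
E = 1.602176634e-19      # elementary charge, C
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
KB = 1.380649e-23        # Boltzmann constant, J/K

def sigma_intraband(freq_hz, mu_c_ev, tau_s=1e-13, temp_k=300.0):
    """Intraband (Drude-like) term of graphene's Kubo surface conductivity.

    tau_s (relaxation time) and temp_k are illustrative assumptions."""
    omega = 2 * math.pi * freq_hz
    mu_c = mu_c_ev * E
    prefactor = 2 * E**2 * KB * temp_k / (math.pi * HBAR**2)
    thermal = math.log(2 * math.cosh(mu_c / (2 * KB * temp_k)))
    # Complex Drude dispersion: i / (omega + i/tau)
    return prefactor * thermal * (1j / (omega + 1j / tau_s))

# Sweep the chemical potential at 1 THz: a higher mu_c gives a larger
# |sigma|, which shifts the reflection coefficient of each array element.
for mu in (0.1, 0.3, 0.5):
    s = sigma_intraband(1e12, mu)
    print(f"mu_c = {mu} eV: sigma = {s.real:.3e} + {s.imag:.3e}j S")
```

The sweep makes the qualitative point of the abstract visible: the complex conductivity, and hence the element's reflection phase, is a monotonic function of the electrically tunable chemical potential.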

Keywords: graphene, method of moment combined with generalised equivalent circuit, reconfigurable metasurface, reflectarray, terahertz domain

Procedia PDF Downloads 173
1617 Evaluation of Public Library Adult Programs: Use of Servqual and Nippa Assessment Standards

Authors: Anna Ching-Yu Wong

Abstract:

This study aims to identify the quality and effectiveness of the adult programs provided by a public library, using the ServQUAL method and the National Library Public Programs Assessment guidelines (NIPPA, June 2019). ServQUAL covers several variables, namely tangibles, reliability, responsiveness, assurance, and empathy. The NIPPA guidelines focus on program characteristics, particularly on outcomes, i.e., the level of satisfaction of program participants. The population reached comprised adults who participated in library adult programs at a small-town public library in Kansas. The study was designed as quantitative evaluative research that analyzed the quality and effectiveness of the library's adult programs through the role of each factor, based on ServQUAL and the NIPPA library program assessment guidelines. Data were collected from November 2019 to January 2020 using a questionnaire with a Likert scale, and the data obtained were analyzed in a descriptive quantitative manner. This research provides information about the quality and effectiveness of existing programs and can be used as input for developing strategies for future adult programs. Overall, the ServQUAL measurement indicates very good quality, but each variable still has areas needing improvement and emphasis: the tangibles variable, in the indicators of temperature and space of the meeting room; the reliability variable, in the timely delivery of the programs; the responsiveness variable, in the ability of the presenters to convey trust and confidence to participants; the assurance variable, in the indicator of knowledge and skills of program presenters; and the empathy variable, in the presenters' willingness to provide extra assistance.
The measurement of program outcomes based on the NIPPA guidelines is very positive: over 96% of participants indicated that the programs were informative and fun, that they learned new knowledge and new skills, and that they would recommend the programs to their friends and families. They believed that, together, the library and participants build stronger and healthier communities.
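The gap analysis underlying ServQUAL (perception score minus expectation score, averaged per dimension) can be sketched as follows; the item groupings and Likert responses below are illustrative placeholders, not the study's actual instrument or data.

```python
from statistics import mean

# Illustrative 5-point Likert responses as (perception, expectation) pairs,
# grouped by the five ServQUAL dimensions used in the study.
responses = {
    "tangibles":      [(4, 5), (3, 5)],   # e.g. room temperature, meeting space
    "reliability":    [(4, 5)],           # e.g. timely delivery of programs
    "responsiveness": [(4, 5)],
    "assurance":      [(4, 5)],
    "empathy":        [(4, 5)],
}

def gap_scores(data):
    """Mean ServQUAL gap (perception - expectation) per dimension.

    Negative gaps flag dimensions that still need improvement."""
    return {dim: mean(p - e for p, e in items) for dim, items in data.items()}

for dim, gap in gap_scores(responses).items():
    print(f"{dim:>14}: gap = {gap:+.2f}")
```

A service can thus rate as "very good quality" overall while every dimension still shows a negative gap, which is exactly the pattern the abstract reports.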

Keywords: ServQual model, ServQual in public libraries, library program assessment, NIPPA library programs assessment

Procedia PDF Downloads 92
1616 Fine Needle Aspiration Biopsy of Thyroid Nodules

Authors: Ilirian Laçi, Alketa Spahiu

Abstract:

Large strumas of the thyroid gland can be observed by simple inspection in everyday life. Medical practices see patients with palpable nodes of the thyroid gland, mainly nodes of the order of 10 millimetres, and many cases that are negative on palpation prove positive on ultrasound examination. The use of ultrasound for diagnosis has therefore increased the number of patients with thyroid nodes over the last couple of decades in all countries, Albania included. Thus, there is evidence of an increased number of patients affected by this pathology, among whom female patients dominate. Demographically, the capital shows high numbers due to its large population, but of interest is the high incidence in areas distant from the sea. While no significant link with related pathologies was evidenced, an element of heredity was evident in thyroid nodes. When we speak of nodes of the thyroid gland, we should consider hyperplasia, neoplasia, and the inflammatory diseases that cause them. This increase parallels the worldwide rise in the incidence of thyroid nodules; malignant cases account for about 5% and do not depend on size. Given that most thyroid nodes are benign, the main objective of examining the nodes was to distinguish benign from malignant cases and so avoid undue surgery. The subjects of this study were 212 patients who underwent fine-needle aspiration (FNA) under ultrasound guidance at the Medical University Center of Tirana. All the patients came to the Mother Teresa University Hospital from public and private hospitals and other polyclinics, and had an ultrasound examination before visiting the Center of Nuclear Medicine for a scintigraphy of the thyroid gland between September 2016 and September 2017.
For correlation, all patients were examined by ultrasound of the thyroid gland prior to the scintigraphy. The ultrasound included evaluation of the number of nodes; their size; their solid, cystic, or solid-cystic structure; their echogenicity on the grey scale; the presence of calcification; the presence of lymph nodes; the presence of adenopathy; and the correlation with the cytology results from the Laboratory of Pathological Anatomy of the Medical University Center of Tirana.

Keywords: thyroid nodes, fine needle aspiration, ultrasound, scintigraphy

Procedia PDF Downloads 98
1615 O-Functionalized CNT Mediated CO Hydro-Deoxygenation and Chain Growth

Authors: K. Mondal, S. Talapatra, M. Terrones, S. Pokhrel, C. Frizzel, B. Sumpter, V. Meunier, A. L. Elias

Abstract:

Worldwide energy independence relies on the ability to leverage locally available resources for fuel production. Recently, syngas produced through gasification of carbonaceous materials has provided a gateway to a host of processes for the production of various chemicals, including transportation fuels. The basis of the production of gasoline and diesel-like fuels is the Fischer-Tropsch Synthesis (FTS) process: a catalyzed chemical reaction that converts a mixture of carbon monoxide (CO) and hydrogen (H2) into long-chain hydrocarbons. Until now, it has been argued that only transition metal catalysts (usually Co or Fe) are active toward CO hydrogenation and subsequent chain growth in the presence of hydrogen. In this paper, we demonstrate that carbon nanotube (CNT) surfaces are also capable of hydro-deoxygenating CO and producing long-chain hydrocarbons similar to those obtained through FTS, but with orders of magnitude higher conversion efficiencies than the present state-of-the-art FTS catalysts. We have used advanced experimental tools such as XPS and microscopy techniques to characterize the CNTs and identify C-O functional groups as the active sites for the enhanced catalytic activity. Furthermore, we have conducted quantum Density Functional Theory (DFT) calculations to confirm that C-O groups (inherent on CNT surfaces) could indeed be catalytically active towards the reduction of CO with H2 and capable of sustaining chain growth. The DFT calculations show that the kinetically and thermodynamically feasible routes for CO insertion and hydro-deoxygenation differ from those on transition metal catalysts. Experiments on a continuous-flow tubular reactor with various nearly metal-free CNTs have been carried out and the products analyzed. CNTs functionalized by various methods were evaluated under different conditions. Reactor tests revealed that hydrogen pre-treatment reduced the activity of the catalysts to negligible levels.
Without the pre-treatment, the activity for CO conversion was found to be 7 µmol CO/g CNT/s. The O-functionalized samples showed much higher activities, greater than 85 µmol CO/g CNT/s, with nearly 100% conversion. Analyses show that CO hydro-deoxygenation occurred at the C-O/O-H functional groups. It was found that while the products were similar to FT products, differences in selectivity were observed, which, in turn, reflect a different catalytic mechanism. These findings open a new paradigm for CNT-based hydrogenation catalysts and constitute a defining point for obtaining clean, earth-abundant, alternative fuels through the use of efficient and renewable catalysts.
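The mass-normalized activity figures quoted above follow from flow-reactor quantities by a one-line calculation, rate = (CO feed rate × conversion) / catalyst mass; the feed rate and catalyst mass in this sketch are illustrative assumptions chosen only to reproduce the reported scale, not the authors' actual operating conditions.

```python
def co_conversion_rate(f_co_in_umol_s, conversion, m_cat_g):
    """Mass-normalized CO consumption rate, in umol CO / g catalyst / s.

    f_co_in_umol_s: molar feed rate of CO into the reactor (umol/s)
    conversion:     fractional CO conversion (0..1)
    m_cat_g:        catalyst mass (g)
    """
    return f_co_in_umol_s * conversion / m_cat_g

# Illustrative numbers: a feed of 8.5 umol CO/s over 0.1 g of
# O-functionalized CNTs at ~100% conversion gives the >85
# umol CO/g CNT/s scale reported for the functionalized samples.
rate = co_conversion_rate(f_co_in_umol_s=8.5, conversion=1.0, m_cat_g=0.1)
print(f"{rate:.0f} umol CO / g CNT / s")  # prints 85
```

The same formula, with the conversion measured downstream, is how the 7 versus >85 µmol CO/g CNT/s comparison between pre-treated and O-functionalized samples would be computed.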

Keywords: CNT, CO Hydrodeoxygenation, DFT, liquid fuels, XPS, XTL

Procedia PDF Downloads 338
1614 Weakly Non-Linear Stability Analysis of Newtonian Liquids and Nanoliquids in Shallow, Square and Tall High-Porosity Enclosures

Authors: Pradeep G. Siddheshwar, K. M. Lakshmi

Abstract:

The present study deals with a weakly non-linear stability analysis of Rayleigh-Benard-Brinkman convection in nanoliquid-saturated porous enclosures. The modified Buongiorno-Brinkman model (MBBM) is used for the conservation of linear momentum in a nanoliquid-saturated porous medium under the Boussinesq approximation. Thermal equilibrium is imposed between the base liquid and the nanoparticles. The thermophysical properties of the nanoliquid are modeled using phenomenological laws and mixture theory. The fifth-order Lorenz model is derived for the problem and is then reduced to the first-order Ginzburg-Landau equation (GLE) using the multi-scale method. The analytical solution of the GLE for the amplitude is then used to quantify the heat transport in closed form, in terms of the Nusselt number. It is found that the addition of a dilute concentration of nanoparticles significantly enhances the heat transport, the dominant reason being the high thermal conductivity of the nanoliquid in comparison with that of the base liquid; this aspect of nanoliquids helps in the speedy removal of heat. The porous medium serves the purpose of retaining energy in the system due to its low thermal conductivity. The present model allows a unified study that recovers results for the base liquid, the nanoliquid, the base-liquid-saturated porous medium, and the nanoliquid-saturated porous medium. Three different types of enclosure are considered by taking different values of the aspect ratio, and it is observed that heat transport is greatest in the tall porous enclosure and least in the shallow one. A detailed discussion is also given of the estimated heat transport for different volume fractions of nanoparticles. The results of the single-phase model are shown to be a limiting case of the present study. The study is made for three boundary combinations, viz., free-free, rigid-rigid, and rigid-free.
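The amplitude-equation step can be sketched numerically, assuming a generic real Ginzburg-Landau form dA/dt = a₁A − a₂A³ with purely illustrative coefficients (in the paper the coefficients follow from the fifth-order Lorenz model via the multi-scale reduction), and a schematic Nusselt-number correction of the form Nu = 1 + cA²:

```python
import math

# Illustrative coefficients; in the paper these come from the
# fifth-order Lorenz model via the multi-scale reduction.
A1, A2 = 2.0, 1.0    # linear growth and cubic saturation coefficients
C_NU = 2.0           # assumed weight of A**2 in the Nusselt correction

def steady_amplitude(a1=A1, a2=A2, dt=1e-3, steps=20000, a0=0.01):
    """Integrate the Ginzburg-Landau equation dA/dt = a1*A - a2*A**3
    by forward Euler until the amplitude saturates."""
    a = a0
    for _ in range(steps):
        a += dt * (a1 * a - a2 * a**3)
    return a

a_num = steady_amplitude()
a_exact = math.sqrt(A1 / A2)          # analytical fixed point of the GLE
nu = 1.0 + C_NU * a_num**2            # schematic heat transport, Nu = 1 + c*A**2
print(f"A = {a_num:.4f} (exact {a_exact:.4f}), Nu = {nu:.3f}")
```

The numerical amplitude saturates at the analytical fixed point sqrt(a₁/a₂), which is why the closed-form GLE solution suffices to express the Nusselt number without time-stepping the full Lorenz system.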

Keywords: Buongiorno model, Ginzburg-Landau equation, Lorenz equations, porous medium

Procedia PDF Downloads 318
1613 Arterial Compliance Measurement Using Split Cylinder Sensor/Actuator

Authors: Swati Swati, Yuhang Chen, Robert Reuben

Abstract:

Coronary stents are tube-shaped devices placed in coronary arteries to keep the arteries open in the treatment of coronary arterial disease, and they are routinely deployed to clear atheromatous plaque. Because its structure is cylindrically symmetrical, the stent essentially applies an internal pressure to the artery, and this may introduce some abnormalities in the final arterial shape. The goal of the project is to develop segmented circumferential arterial-compliance measuring devices which can (eventually) be deployed in vivo. Segmentation of the device will allow the mechanical asymmetry of any stenosis to be assessed. The purpose is to assess the quality of arterial tissue for applications in tailored stents and in the assessment of aortic aneurysm. Measurement of arterial distensibility is of utmost importance for diagnosing cardiovascular diseases and for predicting future cardiac events or coronary artery disease. In order to arrive at some generic outcomes, a preliminary experimental set-up has been devised to establish the measurement principles for the device at the macro scale. The measurement methodology consists of a strain-gauge system monitored in real time by LABVIEW software. This virtual instrument employs a balloon within a gelatine model contained in a split cylinder with strain gauges fixed on it, and it allows automated measurement of the effect of air pressure on the gelatine and of strain with respect to time and pressure during inflation. A simple creep model of compliance has been applied to the results in order to extract some measures of arterial compliance. The results obtained from the experiments have been used to study the effect of air pressure on strain at varying time intervals. The results clearly demonstrate that as arterial volume decreases and arterial pressure increases, arterial strain increases, thereby decreasing the arterial compliance.
The measurement system could lead to the development of portable, inexpensive, and small equipment, and could prove to be an efficient automated compliance measurement device.
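The compliance extraction described above amounts to estimating C = ΔV/ΔP between successive inflation readings; the following sketch uses illustrative pressure-volume data (not the gelatine measurements) to show the falling-compliance trend the abstract reports as the wall stiffens.

```python
def compliance(volumes_ml, pressures_mmhg):
    """Chord estimates of compliance C = dV/dP between successive readings."""
    return [(v2 - v1) / (p2 - p1)
            for (v1, p1), (v2, p2) in zip(
                zip(volumes_ml, pressures_mmhg),
                zip(volumes_ml[1:], pressures_mmhg[1:]))]

# Illustrative inflation readings (not from the gelatine experiment):
# the volume gained per unit pressure shrinks as the wall stiffens,
# i.e., compliance falls as pressure rises.
pressures = [80.0, 100.0, 120.0, 140.0]   # mmHg
volumes   = [1.00, 1.20, 1.32, 1.38]      # mL

for c in compliance(volumes, pressures):
    print(f"C = {c:.4f} mL/mmHg")
```

A segmented version of the device would simply compute one such compliance series per circumferential segment, exposing any mechanical asymmetry of a stenosis.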

Keywords: arterial compliance, atheromatous plaque, mechanical symmetry, strain measurement

Procedia PDF Downloads 275