Search results for: intelligence cycle
301 Mapping Intertidal Changes Using Polarimetry and Interferometry Techniques
Authors: Khalid Omari, Rene Chenier, Enrique Blondel, Ryan Ahola
Abstract:
Northern Canadian coasts have vulnerable and very dynamic intertidal zones, with very high tides occurring in several areas. The impact of climate change presents challenges not only for maintaining this biodiversity but also for navigation safety adaptation, due to the high sediment mobility in these coastal areas. Thus, frequent mapping of shorelines and intertidal changes is of high importance. To help quantify the changes in these fragile ecosystems, remote sensing provides practical monitoring tools at local and regional scales. Traditional methods based on high-resolution optical sensors are often used to map intertidal areas by exploiting the contrast in the spectral response of intertidal classes in the visible, near- and mid-infrared bands. Tidal areas are highly reflective in visible bands, mainly because of the presence of fine sand deposits. However, obtaining cloud-free optical data that coincide with low tides in intertidal zones in northern regions is very difficult. Alternatively, the all-weather, daylight-independent capability of microwave remote sensing using synthetic aperture radar (SAR) can offer valuable geophysical parameters with a high revisit frequency over intertidal zones. Multi-polarization SAR parameters have been used successfully to map intertidal zones using incoherent target decomposition. Moreover, the crustal displacements caused by ocean tide loading may reach several centimetres and can be detected and quantified with differential interferometric synthetic aperture radar (DInSAR). Soil moisture change has a significant impact on both the coherence and the backscatter. For instance, an increase in backscatter intensity associated with low coherence is an indicator of abrupt surface changes. In this research, we present preliminary results obtained from our investigation of the potential of fully polarimetric Radarsat-2 data for mapping an intertidal zone located at Tasiujaq, on the south-west shore of Ungava Bay, Quebec. Using the repeat-pass cycle of Radarsat-2, multiple seasonal fine quad (FQ14W) images were acquired over the site between 2016 and 2018. Only 8 images corresponding to low tide conditions were selected and used to build an interferometric stack of data. The observed displacements along the line of sight, generated using HH and VV polarizations, are compared with the changes detected using the Freeman-Durden polarimetric decomposition and the Touzi degree of polarization extrema. Results show the consistency of both approaches in their ability to monitor changes in intertidal zones.
Keywords: SAR, degree of polarization, DInSAR, Freeman-Durden, polarimetry, Radarsat-2
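The abstract does not spell out the phase-to-displacement conversion it relies on; for context, the standard DInSAR relation for a C-band sensor such as Radarsat-2 (stated here as background, not as the authors' derivation) is:

```latex
% Standard DInSAR relation (background, not taken from the abstract):
% line-of-sight (LOS) displacement recovered from the differential interferometric phase.
\[
  d_{\mathrm{LOS}} \;=\; -\,\frac{\lambda}{4\pi}\,\Delta\phi_{\mathrm{defo}},
  \qquad
  \Delta\phi_{\mathrm{defo}} \;=\; \Delta\phi_{\mathrm{int}}
  \;-\; \Delta\phi_{\mathrm{topo}} \;-\; \Delta\phi_{\mathrm{atm}} \;-\; \Delta\phi_{\mathrm{noise}}
\]
% With \lambda \approx 5.55\,\mathrm{cm} at C-band, one full fringe (2\pi) corresponds to
% roughly \lambda/2 \approx 2.8\,\mathrm{cm} of LOS motion, which is why centimetre-level
% tide-loading displacements are within reach of the technique.
```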
Procedia PDF Downloads 137
300 Acute Neurophysiological Responses to Resistance Training; Evidence of a Shortened Super Compensation Cycle and Early Neural Adaptations
Authors: Christopher Latella, Ashlee M. Hendy, Dan Vander Westhuizen, Wei-Peng Teo
Abstract:
Introduction: Neural adaptations following resistance training interventions have been widely investigated; however, the evidence regarding the mechanisms of early adaptation is less clear. Understanding the neural responses to an acute resistance training session is pivotal in the prescription of frequency, intensity and volume in applied strength and conditioning practice. Therefore, the primary aim of this study was to investigate the time course of neurophysiological mechanisms post training against current super compensation theory, and secondly, to examine whether these responses reflect the neural adaptations observed with resistance training interventions. Methods: Participants (N=14) completed a randomised, counterbalanced crossover study comparing control, strength and hypertrophy conditions. The strength condition involved 3 x 5RM leg extensions with 3 min recovery, while the hypertrophy condition involved 3 x 12RM with 60 s recovery. Transcranial magnetic stimulation (TMS) and peripheral nerve stimulation were used to measure excitability of the central and peripheral neural pathways, and maximal voluntary contraction (MVC) to quantify strength changes. Measures were taken pre, immediately post, 10, 20 and 30 mins and 1, 2, 6, 24, 48, 72 and 96 hrs following training. Results: Significant decreases were observed immediately post and at 10, 20 and 30 min and 1 and 2 hrs for both training conditions compared with the control condition for force (p < .05), the maximal compound wave (p < .005) and the silent period (p < .05). A significant increase in corticospinal excitability (p < .005) was observed for both groups. The difference in corticospinal excitability between the strength and hypertrophy groups approached significance, with a large effect size (η² = .202). All measures returned to baseline within 6 hrs post training. Discussion: Neurophysiological mechanisms appear to be significantly altered in the first 2 hrs post training, returning to homeostasis by 6 hrs. The evidence suggests that the time course of neural recovery post resistance training is 18-40 hours shorter than previous super compensation models indicate. Strength and hypertrophy protocols showed similar response profiles, with the current findings suggesting greater post-training corticospinal drive from hypertrophy training, despite previous evidence that strength training requires greater neural input. The increase in corticospinal drive and decrease in inhibition appear to be a compensatory mechanism for decreases in peripheral nerve excitability and maximal voluntary force output. The changes in corticospinal excitability and inhibition are akin to the adaptive processes observed with training interventions of 4 wks or longer. It appears that the 2 hr recovery period post training is the most influential for priming further neural adaptations with resistance training. Secondly, prescribed resistance sessions can be scheduled closer together than previous super compensation theory suggests for optimal strength gains.
Keywords: neural responses, resistance training, super compensation, transcranial magnetic stimulation
Procedia PDF Downloads 284
299 Polymer Nanocomposite Containing Silver Nanoparticles for Wound Healing
Authors: Patrícia Severino, Luciana Nalone, Daniele Martins, Marco Chaud, Classius Ferreira, Cristiane Bani, Ricardo Albuquerque
Abstract:
Hydrogels produced with polymers have been used in the development of dressings for wound treatment and tissue revitalization. Our study of polymer nanocomposites containing silver nanoparticles shows antimicrobial activity with applications in wound healing. The effects are linked to the slow oxidation of silver and the release of Ag⁺ into the biological environment. Furthermore, bacterial cell membrane penetration and metabolic disruption through cell cycle disarrangement also contribute to microbial cell death. The silver antimicrobial activity has been known for many years, and previous reports show that low silver concentrations are safe for human use. This work aims to develop a hydrogel using natural polymers (sodium alginate and gelatin) combined with silver nanoparticles for wound healing, with antimicrobial properties in cutaneous lesions. The hydrogel development utilized different sodium alginate and gelatin proportions (20:80, 50:50 and 80:20). The incorporation of silver nanoparticles was evaluated at concentrations of 1.0, 2.0 and 4.0 mM. The physico-chemical properties of the formulation were evaluated using ultraviolet-visible (UV-Vis) absorption spectroscopy, Fourier transform infrared (FTIR) spectroscopy, differential scanning calorimetry (DSC), and thermogravimetric (TG) analysis. The morphological characterization was made using transmission electron microscopy (TEM). A human fibroblast (L929) viability assay was performed, along with a minimum inhibitory concentration (MIC) assessment as well as an in vivo cicatrizant test. The UV-Vis results suggested that sodium alginate and gelatin in the 80:20 proportion with 4 mM AgNO₃ gave the best hydrogel formulation. The nanoparticle absorption spectra showed a maximum band around 430-450 nm, which suggests a spheroidal form. The TG curve exhibited two weight loss events. DSC indicated one endothermic peak at 230-250 °C, due to sample fusion. The polymers acted as stabilizers of the nanoparticles, defining their size and shape. The L929 human fibroblast viability assay gave 105% cell viability for the negative control, while gelatin presented 96% viability, alginate:gelatin (80:20) 96.66%, and alginate 100.33% viability. The sodium alginate:gelatin (80:20) formulation exhibited significant antimicrobial activity, with minimal bacterial growth at a concentration of 1.06 mg·mL⁻¹ against Pseudomonas aeruginosa and 0.53 mg·mL⁻¹ against Staphylococcus aureus. The in vivo results showed a significant reduction in wound surface area. On the seventh day, the hydrogel-nanoparticle formulation had reduced the total area of injury by 81.14%, while the control reached a 45.66% reduction. The results suggest that the silver-hydrogel nanoformulation exhibits potential for wound dressing therapeutics.
Keywords: nanocomposite, wound healing, hydrogel, silver nanoparticle
Procedia PDF Downloads 101
298 Investigation of Xanthomonas euvesicatoria on Seed Germination and Seed to Seedling Transmission in Tomato
Authors: H. Mayton, X. Yan, A. G. Taylor
Abstract:
Infested tomato seeds were used to investigate the influence of Xanthomonas euvesicatoria on germination and seed-to-seedling transmission in controlled environment and greenhouse assays, in an effort to develop effective seed treatments and characterize seed-borne transmission of bacterial leaf spot of tomato. Bacterial leaf spot of tomato, caused by four distinct Xanthomonas species, X. euvesicatoria, X. gardneri, X. perforans, and X. vesicatoria, is a serious disease worldwide. In the United States, disease prevention is expensive for commercial growers in warm, humid regions of the country, and crop losses can be devastating. In this study, four different infested tomato seed lots were extracted from tomato fruits infected with bacterial leaf spot from a field in New York State in 2017 that had been inoculated with X. euvesicatoria. In addition, vacuum infiltration at 61 kilopascals for 1, 5, 10, and 15 minutes and seed soaking for 5, 10, 15, and 30 minutes with different bacterial concentrations were used to artificially infest seed in the laboratory. For controlled environment assays, infested tomato seeds from the field and laboratory were placed on moistened blue blotter in square plastic boxes (10 cm x 10 cm) and incubated at 20/30 °C with an 8/16 hour light cycle, respectively. Infested tomato seeds from the field and laboratory were also planted in small plastic trays in soil (peat-lite medium) and placed in the greenhouse with 24/18 °C day and night temperatures, respectively, with a 14-hour photoperiod. Seed germination was assessed after eight days in the laboratory and 14 days in the greenhouse. Polymerase chain reaction (PCR) using the hrpB7 primers (RST65 [5’-GTCGTCGTTACGGCAAGGTGGTG-3’] and RST69 [5’-TCGCCCAGCGTCATCAGGCCATC-3’]) was performed to confirm the presence or absence of the bacterial pathogen in seed lots collected from the field and in germinating seedlings in all experiments. For infested seed lots from the field, germination ranged from 84-98% and was lowest (84%) in the seed lot with the highest level of bacterial infestation (55%). No adverse effect on germination was observed for artificially infested seeds at any bacterial concentration or infiltration method when compared with a non-infested control. Germination in laboratory assays for artificially infested seeds ranged from 82-100%. In the controlled environment assays, 2.5% of seedlings were PCR positive for the pathogen, and in the greenhouse assays, no infected seedlings were detected. From these experiments, X. euvesicatoria does not appear to adversely influence germination. The lower germination rate of field-collected seed may be due to contamination with multiple pathogens and saprophytic organisms, as artificial bacterial infestation of seed in the laboratory had no effect on germination. No evidence of systemic movement from seed to seedling was observed in the greenhouse assays; however, in the controlled environment assays, some seedlings were PCR positive. Additional experiments are underway with green fluorescent protein-expressing isolates to further characterize seed-to-seedling transmission of the bacterial leaf spot pathogen in tomato.
Keywords: bacterial leaf spot, seed germination, tomato, Xanthomonas euvesicatoria
Procedia PDF Downloads 135
297 Facial Recognition of University Entrance Exam Candidates Using FaceMatch Software in Iran
Authors: Mahshid Arabi
Abstract:
In recent years, remarkable advancements in the fields of artificial intelligence and machine learning have led to the development of facial recognition technologies. These technologies are now employed in a wide range of applications, including security, surveillance, healthcare, and education. In the field of education, the identification of university entrance exam candidates has been one of the fundamental challenges. Traditional methods such as ID cards and handwritten signatures are not only inefficient and prone to fraud but also susceptible to errors. In this context, utilizing advanced technologies like facial recognition can be an effective and efficient solution to increase the accuracy and reliability of identity verification in entrance exams. This article examines the use of FaceMatch software for recognizing the faces of university entrance exam candidates in Iran. The main objective of this research is to evaluate the efficiency and accuracy of FaceMatch software in identifying university entrance exam candidates to prevent fraud and ensure the authenticity of individuals' identities. Additionally, this research investigates the advantages and challenges of using this technology in Iran's educational systems. The research was conducted using an experimental method and random sampling: 1000 university entrance exam candidates in Iran were selected as samples. The facial images of these candidates were processed and analyzed using FaceMatch software. The software's accuracy and efficiency were evaluated using various metrics, including accuracy rate, error rate, and processing time. The results indicated that FaceMatch software could accurately identify candidates with a precision of 98.5%. The software's error rate was less than 1.5%, demonstrating its high efficiency in facial recognition. Additionally, the average processing time for each candidate's image was less than 2 seconds, indicating the software's high efficiency. Statistical evaluation of the results, including analysis of variance (ANOVA) and t-tests, showed that the observed differences were significant and that the software's accuracy in identity verification is high. The findings of this research suggest that FaceMatch software can be effectively used as a tool for identifying university entrance exam candidates in Iran. This technology not only enhances security and prevents fraud but also simplifies and streamlines the exam administration process. However, challenges such as preserving candidates' privacy and the costs of implementation must also be considered. The use of facial recognition technology with FaceMatch software in Iran's educational systems can be an effective solution for preventing fraud and ensuring the authenticity of university entrance exam candidates' identities. Given the promising results of this research, it is recommended that this technology be more widely implemented and utilized in the country's educational systems.
Keywords: facial recognition, FaceMatch software, Iran, university entrance exam
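As context for the metrics reported above (accuracy rate, error rate, processing time), the following minimal sketch shows how such figures are typically computed for a 1:1 face-verification workflow. The function and variable names are hypothetical; the FaceMatch API itself is not described in the abstract.

```python
# Hypothetical sketch (not the FaceMatch API): estimating accuracy, error rate and
# processing time of a face-verification system from per-candidate match scores.
import time
import numpy as np

def evaluate_verification(match_fn, gallery, probes, labels, threshold=0.5):
    """match_fn(a, b) -> similarity in [0, 1]; labels[i] is True when probe i
    really is the enrolled candidate it claims to be."""
    decisions, durations = [], []
    for probe, claimed_id in probes:
        start = time.perf_counter()
        score = match_fn(probe, gallery[claimed_id])   # compare probe with enrolled image
        durations.append(time.perf_counter() - start)
        decisions.append(score >= threshold)

    decisions = np.array(decisions)
    labels = np.array(labels)
    accuracy = float(np.mean(decisions == labels))     # e.g. 0.985 reported in the study
    error_rate = 1.0 - accuracy                        # e.g. < 0.015
    mean_time = float(np.mean(durations))              # reported as < 2 s per image
    return accuracy, error_rate, mean_time
```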
Procedia PDF Downloads 48
296 Interdisciplinary Evaluations of Children with Autism Spectrum Disorder in a Telehealth Arena
Authors: Janice Keener, Christine Houlihan
Abstract:
Over the last several years, there has been an increase in children identified as having Autism Spectrum Disorder (ASD). Specialists across several disciplines, including mental health and medical professionals, have been tasked with ensuring accurate and timely evaluations for children with suspected ASD. Due to the nature of the ASD symptom presentation, an interdisciplinary assessment and treatment approach best addresses the needs of the whole child. During the unprecedented COVID-19 pandemic, clinicians were faced with the question of how to continue interdisciplinary assessments in a telehealth arena. Instruments that were previously used to assess ASD in person were no longer appropriate measures due to safety restrictions. For example, the Autism Diagnostic Observation Schedule requires examiners and children to be in very close proximity to each other, and if masks or face shields are worn, they render the evaluation invalid. Similar issues arose with the various cognitive measures used to assess children, such as the Wechsler intelligence scales and the Differential Ability Scales. Thus, the need arose to identify measures that can be safely and accurately administered under safety guidelines. The incidence of ASD continues to rise over time. Currently, the Centers for Disease Control and Prevention estimates that 1 in 59 children meet the criteria for a diagnosis of ASD. The reasons for this increase are likely manifold, including changes in diagnostic criteria, public awareness of the condition, and other environmental and genetic factors. The rise in the incidence of ASD has led to a greater need for diagnostic and treatment services across the United States. The uncertainty of the diagnostic process can lead to an increased level of stress for families of children with suspected ASD. Along with this increase, there is a need for diagnostic clarity to avoid both under- and over-identification of this condition. Interdisciplinary assessment is ideal for children with suspected ASD, as it allows for an assessment of the whole child over the course of time and across multiple settings. Clinicians such as psychologists and developmental pediatricians play important roles in the initial evaluation of autism spectrum disorder. An ASD assessment may consist of several types of measures; standardized checklists, structured interviews, and direct assessments such as the ADOS-2 are just a few examples. With the advent of telehealth, clinicians were asked to continue to provide meaningful interdisciplinary assessments via an electronic platform, in a sense going into the family home, evaluating the clinical symptom presentation remotely, and confidently making an accurate diagnosis. This poster presentation will review the benefits, limitations, and interpretation of these various instruments. The role of other medical professionals will also be addressed, including medical providers, speech pathology, and occupational therapy.
Keywords: Autism Spectrum Disorder assessments, interdisciplinary evaluations, tele-assessment with Autism Spectrum Disorder, diagnosis of Autism Spectrum Disorder
Procedia PDF Downloads 209
295 Improving Patient and Clinician Experience of Oral Surgery Telephone Clinics
Authors: Katie Dolaghan, Christina Tran, Kim Hamilton, Amanda Beresford, Vicky Adams, Jamie Toole, John Marley
Abstract:
During the COVID-19 pandemic, routine outpatient appointments were not possible face to face. That resulted in many branches of healthcare starting virtual clinics, and these clinics have continued following the return to face to face patient appointments. With these new types of clinic, it is important to ensure that a high standard of patient care is maintained. A quality improvement project was therefore carried out to enhance the patient and clinician experience of oral surgery telephone clinics whilst ensuring they remained a safe, effective and efficient use of resources. The project began by developing a process map for the consultation process and agreeing on the design of a driver diagram and tests of change. In plan-do-study-act (PDSA) cycle 1, a single consultant completed an online survey after every patient encounter over a 5 week period. Baseline patient responses were collected using a follow-up telephone survey for each patient. Piloting led to several iterations of both survey designs. Salient results of PDSA cycle 1 included patients not receiving appointment letters, patients feeling more anxious about a virtual appointment, and many preferring a face to face appointment. The initial clinician data showed a positive response, with a provisional diagnosis being reached in 96.4% of encounters. In PDSA cycle 2, a patient information sheet was provided, and information leaflets relevant to the patients' conditions were developed and sent following new patient telephone clinics, with follow-up survey analysis as before to monitor for signals of change. We also introduced the ability for patients to send images of their lesion prior to the consultation. Following the changes implemented, we noted an improvement in patient satisfaction; in fact, many patients preferred virtual clinics as they led to less disruption of their working lives. The extra reading material both before and after the appointments eased patients' anxiety around virtual clinics and helped them to prepare for their appointment. Following the patient feedback, virtual clinics are now used for review patients as well, with all four consultants within the department continuing to utilise them. During this presentation, the progression of these clinics and the reasons they are still operating following the return to face to face appointments will be explored. The lessons gained using a QI approach have helped to deliver an optimal service that is valid and reliable, as well as safe, effective and efficient for the patient, while helping to reduce the pressures from ever-increasing waiting lists. In summary, our work in improving the quality of virtual clinics has resulted in improved patient satisfaction along with reduced pressure on the facilities of the health trust.
Keywords: clinic, satisfaction, telephone, virtual
Procedia PDF Downloads 58
294 Diagnosis of Intermittent High Vibration Peaks in Industrial Gas Turbine Using Advanced Vibrations Analysis
Authors: Abubakar Rashid, Muhammad Saad, Faheem Ahmed
Abstract:
This paper provides a comprehensive study pertaining to the diagnosis of intermittent high vibrations on an industrial gas turbine using detailed vibration analysis, followed by their rectification. Engro Polymer & Chemicals Limited, a chlor-vinyl complex located in Pakistan, has a captive combined cycle power plant with two 28 MW gas turbines (make Hitachi) & one 15 MW steam turbine. In 2018, the organization faced an issue of high vibrations on one of the gas turbines. These high vibration peaks appeared intermittently on both the compressor's drive end (DE) & the turbine's non-drive end (NDE) bearings. The amplitude of the high vibration peaks was between 150-170% of baseline values on the DE bearing & 200-300% on the NDE bearing. In one of these episodes, the gas turbine tripped on the "High Vibrations Trip" logic actuated at 155 µm. Limited instrumentation is available on the machine, which is monitored with a GE Bently Nevada 3300 system having two proximity probes installed at the turbine NDE, compressor DE & at the generator DE & NDE bearings. The machine's transient ramp-up & steady-state data were collected using ADRE SXP & DSPI 408. Since only one key phasor is installed at the turbine's high-speed shaft, a derived key phasor was configured in ADRE to obtain the low-speed shaft rpm required for data analysis. By analyzing the Bode plots, shaft centerline plot, polar plot & orbit plots, rubbing was evident on the turbine's NDE, along with increased clearance of the turbine's NDE radial bearing. The subject bearing was then inspected, & heavy deposition of carbonized coke was found on the labyrinth seals of the bearing housing, with clear rubbing marks on the shaft & housing covering 20-25 degrees of the inner radius of the labyrinth seals. The collected coke sample was tested in the laboratory & found to be residue of the lube oil in the bearing housing. After detailed inspection & cleaning of the shaft journal area & bearing housing, a new radial bearing was installed. Before assembling the bearing housing, cleaning of the bearing cooling & sealing air lines was also carried out, as inadequate flow of cooling & sealing air can accelerate coke formation in the bearing housing. The machine was then taken back online & data were collected again using ADRE SXP & DSPI 408 for health analysis. The vibrations were found to be in the acceptable zone as per ISO standard 7919-3, while all other parameters were also within the vendor-defined range. As a learning from this case, a revised operating & maintenance regime has also been proposed to enhance the machine's reliability.
Keywords: ADRE, bearing, gas turbine, GE Bently Nevada, Hitachi, vibration
Procedia PDF Downloads 146
293 Passive Greenhouse Systems in Poland
Authors: Magdalena Grudzińska
Abstract:
Passive systems allow solar radiation to be converted into thermal energy thanks to appropriate building construction. Greenhouse systems are particularly worth attention, due to the low costs of their realization and strong architectural appeal. The paper discusses the energy effects of using passive greenhouse systems, such as glazed balconies, in an example residential building. The research was carried out for five localities in Poland belonging to climatic zones that differ in terms of external air temperature and insolation: Koszalin, Poznań, Lublin, Białystok and Zakopane. The analysed apartment had a floor area of approximately 74 m². Three thermal zones were distinguished in the flat: the balcony, the room adjacent to it, and the remaining space, for which various internal conditions were defined. Calculations of the energy demand were made using a dynamic simulation program based on the control volume method. The climatic data were represented by Typical Meteorological Years, prepared on the basis of source data collected from 1971 to 2000. In each locality, the introduction of a passive greenhouse system led to a lower demand for heating in the apartment and a shortening of the heating season. The lowest effectiveness of the passive solar energy system was noted in Białystok: heating demand was reduced there by 14.5%, and the heating season remained the longest, due to low external air temperatures and small sums of solar radiation intensity. In Zakopane, energy savings came to 21% and the heating season was reduced to 107 days, thanks to the greatest insolation during winter. The introduction of greenhouse systems caused an increase in cooling demand in the warmer part of the year, but total energy demand declined in each of the discussed places. However, potential energy savings are smaller if the building's annual life cycle is taken into consideration, and amount to between 5.6% and 14%. Koszalin and Zakopane are the localities in which the greenhouse system allows the best energy results to be achieved. It should be emphasized that favourable conditions for introducing greenhouse systems are connected with different climatic conditions: in the seaside area (Koszalin) they result from high temperatures in the heating season and the smallest insolation in the summer period, while in the mountainous area (Zakopane) they result from high insolation in the winter and low temperatures in the summer. In central and central-eastern Poland, active systems (such as solar energy collectors or photovoltaic panels) could be more beneficial, due to high insolation during summer. It is assessed that passive systems do not eliminate the need for traditional heating in Poland. They can, however, substantially contribute to lower use of non-renewable fuels and the shortening of the heating season. The calculations showed diversification in the effectiveness of greenhouse systems resulting from climatic conditions, and allowed the areas most suitable for the passive use of solar radiation to be identified.
Keywords: solar energy, passive greenhouse systems, glazed balconies, climatic conditions
Procedia PDF Downloads 368
292 Approaching a Tat-Rev Independent HIV-1 Clone towards a Model for Research
Authors: Walter Vera-Ortega, Idoia Busnadiego, Sam J. Wilson
Abstract:
Introduction: Human Immunodeficiency Virus type 1 (HIV-1) is responsible for acquired immunodeficiency syndrome (AIDS), a leading cause of death worldwide, infecting millions of people each year. Despite intensive research in vaccine development, therapies against HIV-1 infection are not curative, and the huge genetic variability of HIV-1 poses challenges to drug development. Current animal models for HIV-1 research present important limitations, impairing the progress of in vivo approaches. Macaques require CD8+ depletion to progress to AIDS, and the maintenance cost is high. Mice are a cheaper alternative but need to be 'humanized,' and breeding is not possible. The development of an HIV-1 clone able to replicate in mice is a challenging proposal. The lack of human co-factors in mice impedes the function of the HIV-1 regulatory proteins Tat and Rev, hampering HIV-1 replication. However, Tat and Rev function can be replaced by constitutive/chimeric promoters, codon-optimized proteins and the constitutive transport element (CTE), generating a novel HIV-1 clone able to replicate in mice without disrupting the amino acid sequence of the virus. By minimally manipulating the genomic 'identity' of the virus, we propose the generation of an HIV-1 clone able to replicate in mice to assist in antiviral drug development. Methods: i) Plasmid construction: The chimeric promoters and CTE copies were cloned by PCR using lentiviral vectors as templates (pCGSW and pSIV-MPCG). Tat mutants were generated from replication-competent HIV-1 plasmids (NHG and NL4-3). ii) Infectivity assays: Retroviral vectors were generated by transfection of human 293T cells and murine NIH 3T3 cells. Virus titre was determined by flow cytometry measuring GFP expression. Human B-cells (AA-2) and HeLa cells (TZMbl) were used for infectivity assays. iii) Protein analysis: Tat protein expression was determined by TZMbl assay and HIV-1 capsid by western blot. Results: We have determined that NIH 3T3 cells are able to generate HIV-1 particles; however, these are not infectious, and further analysis needs to be performed. Codon-optimized HIV-1 constructs are efficiently made in 293T cells in a Tat- and Rev-independent manner and are capable of packaging a competent genome in trans. CSGW is capable of generating infectious particles in the absence of Tat and Rev in human cells when 4 copies of the CTE are placed preceding the 3’LTR. HIV-1 Tat mutant clones encoding different promoters are functional during the first cycle of replication when Tat is added in trans. Conclusion: Our findings suggest that the development of an HIV-1 Tat-Rev independent clone is a challenging but achievable aim. However, further investigations need to be carried out prior to presenting our HIV-1 clone as a candidate model for research.
Keywords: codon-optimized, constitutive transport element, HIV-1, long terminal repeats, research model
Procedia PDF Downloads 308
291 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis
Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara
Abstract:
Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database: two use the natural logarithm of the CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two use the natural logarithm of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created with the gas-phase composition of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship existing between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis together with the use of artificial neural networks (ANN) was successfully applied for correlating the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with the measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model and was used to calculate: (i) the number of neurons of the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the statistical parameter, root mean squared error (RMSE), to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with the predictions inferred from sixteen previously developed, well-known gas geothermometers, was statistically evaluated using an external database to avoid a bias problem. Statistical evaluation was performed through the analysis of the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used for predicting subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich., Los Humeros, Pue., and Cerro Prieto, B.C.) as well as in a blind geothermal system (known as Acoculco, Puebla). The latest results of the gas geothermometers (inferred from gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (México). Details of this new development are outlined in the present research work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy
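As an illustration of the kind of ANN calibration described above, the sketch below regresses bottomhole temperature on the natural logarithms of the gas-phase inputs and reports RMSE and R. It is not the authors' code: the original work used Levenberg-Marquardt training over roughly 2,080 candidate architectures, whereas this sketch uses a single architecture with a generic optimizer, and the data arrays are placeholders supplied by the caller.

```python
# Minimal sketch (not the authors' procedure): a small ANN mapping ln gas-phase
# inputs to bottomhole temperature, scored with RMSE and the correlation R.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

def fit_gas_geothermometer(ln_features, bht_measured, hidden_neurons=8):
    """ln_features: array (n_wells, n_inputs), e.g. columns ln(CO2), ln(H2S),
    ln(H2S/H2), ln(CO2/H2); bht_measured: measured bottomhole temperatures (degC)."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        ln_features, bht_measured, test_size=0.2, random_state=0)
    scaler = StandardScaler().fit(X_tr)
    ann = MLPRegressor(hidden_layer_sizes=(hidden_neurons,),
                       activation="tanh", solver="lbfgs",
                       max_iter=15000, random_state=0)
    ann.fit(scaler.transform(X_tr), y_tr)

    bht_ann = ann.predict(scaler.transform(X_te))           # simulated BHT_ANN
    rmse = float(np.sqrt(np.mean((bht_ann - y_te) ** 2)))   # prediction error
    r = float(np.corrcoef(bht_ann, y_te)[0, 1])             # linear correlation R
    return ann, rmse, r
```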
Procedia PDF Downloads 352
290 Multi-Labeled Aromatic Medicinal Plant Image Classification Using Deep Learning
Authors: Tsega Asresa, Getahun Tigistu, Melaku Bayih
Abstract:
Computer vision is a subfield of artificial intelligence that allows computers and systems to extract meaning from digital images and video. It is used in a wide range of fields of study, including self-driving cars, video surveillance, medical diagnosis, manufacturing, law, agriculture, quality control, health care, facial recognition, and military applications. Aromatic medicinal plants are botanical raw materials used in cosmetics, medicines, health foods, essential oils, decoration, cleaning, and other natural health products for therapeutic and aromatic culinary purposes. These plants and their products not only serve as a valuable source of income for farmers and entrepreneurs but are also exported in exchange for valuable foreign currency. In Ethiopia, there is a lack of technologies for the classification and identification of aromatic medicinal plant parts and the disease types cured by aromatic medicinal plants. Farmers, industry personnel, academicians, and pharmacists find it difficult to identify plant parts and the disease types cured by plants before ingredient extraction in the laboratory. Manual plant identification is a time-consuming, labor-intensive, and lengthy process, and only a few studies have been conducted in the area to address these issues. One way to overcome these problems is to develop a deep learning model for efficient identification of aromatic medicinal plant parts with their corresponding disease types. The objective of the proposed study is to identify aromatic medicinal plant parts and classify their corresponding disease types using computer vision technology; this research therefore developed a model for the classification of aromatic medicinal plant parts and their disease types. Morphological characteristics are still the most important tools for the identification of plants, and leaves are the most widely used parts of plants besides roots, flowers, fruits, and latex. For this study, the researchers used RGB leaf images with a size of 128 x 128 x 3. Five cutting-edge models were trained: a plain convolutional neural network, Inception V3, Residual Neural Network (ResNet), MobileNet, and Visual Geometry Group (VGG). These models were chosen after a comprehensive review of the best-performing models. An 80/20 percentage split was used to evaluate the models, and classification metrics were used to compare them. The pre-trained Inception V3 model performed best, with training and validation accuracies of 99.8% and 98.7%, respectively.
Keywords: aromatic medicinal plant, computer vision, convolutional neural network, deep learning, plant classification, residual neural network
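For readers unfamiliar with the transfer-learning setup the abstract describes, the following minimal sketch fine-tunes a pre-trained Inception V3 backbone on 128 x 128 x 3 RGB leaf images with an 80/20 train/validation split. It is not the authors' code; the directory layout, class count and hyperparameters are assumptions.

```python
# Minimal transfer-learning sketch: frozen Inception V3 backbone + small classifier head.
import tensorflow as tf

IMG_SHAPE = (128, 128, 3)
NUM_CLASSES = 10          # placeholder: actual number of plant-part/disease labels

train_ds = tf.keras.utils.image_dataset_from_directory(
    "leaf_images/",                    # hypothetical folder, one subfolder per class
    validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SHAPE[:2], batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "leaf_images/",
    validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SHAPE[:2], batch_size=32)

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=IMG_SHAPE)
base.trainable = False                 # freeze the pre-trained backbone

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),   # scale [0,255] to [-1,1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```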
Procedia PDF Downloads 187
289 A Strength Weaknesses Opportunities and Threats Analysis of Socialisation Externalisation Combination and Internalisation Modes in Knowledge Management Practice: A Systematic Review of Literature
Authors: Aderonke Olaitan Adesina
Abstract:
Background: The paradigm shift to knowledge as the key to organizational innovation and competitive advantage has made the management of knowledge resources in organizations a mandate. A key component of the knowledge management (KM) cycle is knowledge creation, which research shows to be the result of the interaction between explicit and tacit knowledge. An effective knowledge creation process requires the use of the right model. The SECI (Socialisation, Externalisation, Combination, and Internalisation) model, proposed in 1995, is attested to be a preferred model of choice for knowledge creation activities. The model has, however, been criticized by researchers, who raise concerns, especially about its sequential nature. Therefore, this paper reviews the extant literature on the practical application of each mode of the SECI model, from 1995 to date, with a view to ascertaining its relevance in modern-day KM practice. The study will establish the trends of use, with regard to the location and industry of use, and the interconnectedness of the modes. The main research question is: for organizational knowledge creation activities, is the SECI model indeed linear and sequential? In other words, does the model need to be reviewed in today's KM practice? The review will generate a compendium of the usage of the SECI modes and propose a framework of use, based on the strengths, weaknesses, opportunities, and threats (SWOT) findings of the study. Method: This study will employ the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology to investigate the usage and SWOT of the modes, in order to ascertain the success, or otherwise, of the sequential application of the modes in practice from 1995 to 2019. To achieve this purpose, four databases will be explored to search for open access, peer-reviewed articles from 1995 to 2019. The year 1995 is chosen as the baseline because it was the year the first paper on the SECI model was published. The study will appraise relevant peer-reviewed articles under the search terms SECI (or its synonym, knowledge creation theory), socialization, externalization, combination, and internalization in the title, abstract, or keywords list. This review will include only empirical studies of knowledge management initiatives in which the SECI model and its modes were used. Findings: It is expected that the study will highlight the practical relevance of each mode of the SECI model, the linearity or otherwise of the model, and the SWOT of each mode. Concluding Statement: Organisations can, from the analysis, determine the modes of emphasis for their knowledge creation activities. It is expected that the study will support decision making in the choice of the SECI model as a strategy for the management of organizational knowledge resources, and in appropriating the SECI model, or a remodeled version of it, as a theoretical framework in future KM research.
Keywords: combination, externalisation, internalisation, knowledge management, SECI model, socialisation
Procedia PDF Downloads 355
288 Civilian and Military Responses to Domestic Security Threats: A Cross-Case Analysis of Belgium, France, and the United Kingdom
Authors: John Hardy
Abstract:
The domestic security environment in Europe has changed dramatically in recent years. Since January 2015, a significant number of domestic security threats that emerged in Europe were located in Belgium, France and the United Kingdom. While some threats were detected in the planning phase, many also resulted in terrorist attacks. Authorities in all three countries instituted special or emergency measures to provide additional security to their populations. Each country combined an additional policing presence with a specific military operation to contribute to a comprehensive security response to domestic threats. This study presents a cross-case analysis of the three countries' civilian and military responses to domestic security threats in Europe. Each case study features a unique approach to combining civilian and military capabilities in similar domestic security operations during the same time period and threat environment. The research design focuses on five variables relevant to the relationship between civilian and military roles in each security response. These are the distinction between policing and military roles, the legal framework for the domestic deployment of military forces, prior experience in civil-military coordination, the institutional framework for threat assessments, and the level of public support for the domestic use of military forces. These variables examine the influence of domestic social, political, and legal factors on the design of combined civil-military operations in response to domestic security threats. Each case study focuses on a specific operation: Operation Vigilant Guard in Belgium, Operation Sentinel in France, and Operation Temperer in the United Kingdom. The results demonstrate that the level of distinction between policing and military roles and the existence of a clear and robust legal framework for the domestic use of force by military personnel significantly influence the design and implementation of civilian and military roles in domestic security operations. The findings of this study indicate that Belgium, France and the United Kingdom experienced different design and implementation challenges for their domestic security operations. Belgium and France initially had less-developed legal frameworks for deploying the military in domestic security operations than the United Kingdom. This was offset by public support for enacting emergency measures and the strength of existing civil-military coordination mechanisms. The United Kingdom had a well-developed legal framework for integrating civilian and military capabilities in domestic security operations. However, its experiences in Ireland also made the government more sensitive to public perceptions regarding the domestic deployment of military forces.
Keywords: counter-terrorism, democracy, homeland security, intelligence, militarization, policing
Procedia PDF Downloads 142
287 Targeted Delivery of Docetaxel Drug Using Cetuximab Conjugated Vitamin E TPGS Micelles Increases the Anti-Tumor Efficacy and Inhibit Migration of MDA-MB-231 Triple Negative Breast Cancer
Authors: V. K. Rajaletchumy, S. L. Chia, M. I. Setyawati, M. S. Muthu, S. S. Feng, D. T. Leong
Abstract:
Triple negative breast cancers (TNBC) are among the most aggressive breast cancers, with a high rate of local recurrence and systemic metastases. TNBCs are insensitive to existing hormonal therapy or targeted therapies such as monoclonal antibodies, due to the lack of the oestrogen receptor (ER) and progesterone receptor (PR) and the absence of overexpression of human epidermal growth factor receptor 2 (HER2) compared with other types of breast cancer. The absence of targeted therapies for selective delivery of therapeutic agents into tumours led to the search for druggable targets in TNBC. In this study, we developed a targeted micellar system of cetuximab-conjugated micelles of D-α-tocopheryl polyethylene glycol succinate (vitamin E TPGS) for targeted delivery of docetaxel, as a model anticancer drug, for the treatment of TNBCs. We examined the efficacy of our micellar system in xenograft models of triple negative breast cancer and explored the effect of the micelles on post-treatment tumours in order to elucidate the mechanism underlying the nanomedicine treatment in oncology. The targeting micelles were found to accumulate preferentially in tumours immediately after administration, compared with normal tissue. The fluorescence signal at the tumour site gradually increased up to 12 h and was sustained for up to 24 h, reflecting the accumulation of the targeting (TPFC) micelles in MDA-MB-231/Luc cells. In comparison, for the non-targeting micelles (TPF), the fluorescence signal was evenly distributed over the body of the mice; only a slight increase in fluorescence at the chest area was observed 24 h post-injection, reflecting moderate uptake of micelles by the tumour. The successful delivery of docetaxel into the tumour by the targeting micelles (TPDC) produced a greater degree of tumour growth inhibition than Taxotere® after 15 days of treatment. The ex vivo study demonstrated that tumours treated with targeting micelles exhibit enhanced cell cycle arrest and attenuated proliferation compared with the control and with those treated with non-targeting micelles. Furthermore, the ex vivo investigation revealed that both the targeting and non-targeting micellar formulations show significant inhibition of cell migration, with migration indices reduced to 0.098- and 0.28-fold of the control, respectively. Overall, both the in vivo and ex vivo data increase the confidence that our micellar formulations effectively targeted and inhibited EGFR-overexpressing MDA-MB-231 tumours.
Keywords: biodegradable polymers, cancer nanotechnology, drug targeting, molecular biomaterials, nanomedicine
Procedia PDF Downloads 281
286 Evotrader: Bitcoin Trading Using Evolutionary Algorithms on Technical Analysis and Social Sentiment Data
Authors: Martin Pellon Consunji
Abstract:
Due to the rise in popularity of Bitcoin and other crypto assets as a store of wealth and speculative investment, there is an ever-growing demand for automated trading tools, such as bots, in order to gain an advantage over the market. Traditionally, trading in the stock market was done by professionals with years of training who understood patterns and exploited market opportunities in order to gain a profit. However, nowadays a larger portion of market participants are at minimum aided by market-data processing bots, which can generally generate more stable signals than the average human trader. The rise in trading bot usage can be attributed to the inherent advantages that bots have over humans in terms of processing large amounts of data, lack of emotions such as fear or greed, and predicting market prices using past data and artificial intelligence; hence, a growing number of approaches have been brought forward to tackle this task. However, the general limitation of these approaches can still be traced to the fact that limited historical data does not always determine the future, and that many market participants are still emotion-driven human traders. Moreover, developing markets such as those of the cryptocurrency space have even less historical data to interpret than most other well-established markets. Due to this, some human traders have gone back to tried-and-tested traditional technical analysis tools for exploiting market patterns and simplifying the broader spectrum of data that is involved in making market predictions. This paper proposes a method which uses neuroevolution techniques on both sentiment data and the more traditionally human-consumed technical analysis data in order to gain a more accurate forecast of future market behavior and to account for the way both automated bots and human traders affect the market prices of Bitcoin and other cryptocurrencies. This study's approach uses evolutionary algorithms to automatically develop increasingly improved populations of bots which, by using the latest inflows of market analysis and sentiment data, evolve to efficiently predict future market price movements. The effectiveness of the approach is validated by testing the system in a simulated historical trading scenario and a real Bitcoin market live trading scenario, and by testing its robustness in other cryptocurrency and stock market scenarios. Experimental results during a 30-day period show that this method outperformed the buy-and-hold strategy by over 260% in terms of net profits, even when taking into consideration standard trading fees.
Keywords: neuro-evolution, Bitcoin, trading bots, artificial neural networks, technical analysis, evolutionary algorithms
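To make the evolutionary loop concrete, here is a minimal neuroevolution sketch under stated assumptions; it is not the Evotrader implementation. The feature choices, network size, mutation scale and the synthetic price/sentiment data are all placeholders, and fitness is simply simulated net profit after a flat trading fee.

```python
# Minimal neuroevolution sketch: a population of tiny neural nets maps technical-analysis
# and sentiment features to a trade signal; the fittest bots are mutated each generation.
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES, HIDDEN, POP, GENERATIONS = 5, 8, 40, 30   # e.g. RSI, MACD, MA ratio, sentiment, volume

def init_genome():
    return {"w1": rng.normal(0, 1, (N_FEATURES, HIDDEN)),
            "w2": rng.normal(0, 1, (HIDDEN, 1))}

def signal(genome, features):                  # -1 = short/flat, +1 = long
    hidden = np.tanh(features @ genome["w1"])
    return np.tanh(hidden @ genome["w2"]).ravel()

def fitness(genome, features, returns, fee=0.001):
    pos = signal(genome, features)             # position held at each time step
    trades = np.abs(np.diff(pos, prepend=0.0)) # turnover incurs trading fees
    return float(np.sum(pos * returns - fee * trades))

def mutate(genome, scale=0.1):
    return {k: v + rng.normal(0, scale, v.shape) for k, v in genome.items()}

# Synthetic stand-in data; in practice these would come from market and sentiment feeds.
features = rng.normal(size=(500, N_FEATURES))
returns = rng.normal(0, 0.01, 500)

population = [init_genome() for _ in range(POP)]
for gen in range(GENERATIONS):
    ranked = sorted(population, key=lambda g: fitness(g, features, returns), reverse=True)
    elite = ranked[: POP // 4]                 # keep the best quarter
    population = elite + [mutate(elite[rng.integers(len(elite))])
                          for _ in range(POP - len(elite))]
```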
Procedia PDF Downloads 123
285 Emotion-Convolutional Neural Network for Perceiving Stress from Audio Signals: A Brain Chemistry Approach
Authors: Anup Anand Deshmukh, Catherine Soladie, Renaud Seguier
Abstract:
Emotion plays a key role in many applications, such as healthcare, where it is used to gather patients' emotional behavior. Unlike typical ASR (Automated Speech Recognition) problems, which focus on 'what was said', it is equally important to understand 'how it was said.' Certain emotions are given more importance due to their effectiveness in understanding human feelings. In this paper, we propose an approach that models human stress from audio signals. The research challenge in speech emotion detection is finding the appropriate set of acoustic features corresponding to an emotion. Another difficulty lies in defining the very meaning of emotion and being able to categorize it in a precise manner. Supervised machine learning models, including state-of-the-art deep learning classification methods, rely on the availability of clean and labelled data. One of the problems in affective computing is the limited amount of annotated data; the existing labelled emotion datasets are highly subject to the perception of the annotator. We address the first issue of feature selection by exploiting the use of traditional MFCC (Mel-Frequency Cepstral Coefficients) features in a convolutional neural network. Our proposed Emo-CNN (Emotion-CNN) architecture treats speech representations in a manner similar to how CNNs treat images in a vision problem. Our experiments show that Emo-CNN consistently and significantly outperforms the popular existing methods over multiple datasets. It achieves 90.2% categorical accuracy on the Emo-DB dataset. We claim that Emo-CNN is robust to speaker variations and environmental distortions. The proposed approach achieves 85.5% speaker-dependent categorical accuracy on the SAVEE (Surrey Audio-Visual Expressed Emotion) dataset, beating the existing CNN-based approach by 10.2%. To tackle the second problem of subjectivity in stress labels, we use Lovheim's cube, which is a 3-dimensional projection of emotions. Monoamine neurotransmitters are chemical messengers in the brain that transmit signals involved in perceiving emotions, and the cube aims at explaining the relationship between these neurotransmitters and the positions of emotions in 3D space. The learnt emotion representations from the Emo-CNN are mapped to the cube using three-component principal component analysis (PCA), which is then used to model human stress. This proposed approach not only circumvents the need for labelled stress data but also complies with the psychological theory of emotions given by Lovheim's cube. We believe that this work is the first step towards creating a connection between artificial intelligence and the chemistry of human emotions.
Keywords: deep learning, brain chemistry, emotion perception, Lovheim's cube
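The following sketch illustrates the general pattern the abstract describes (MFCC matrices fed to a CNN as if they were images, followed by a 3-component PCA over the learned embeddings); it is not the authors' Emo-CNN, and the layer sizes, sampling rate and frame count are assumptions.

```python
# Minimal sketch: MFCC "images" -> small CNN classifier -> 3-component PCA on embeddings.
import librosa
import numpy as np
import tensorflow as tf
from sklearn.decomposition import PCA

def mfcc_image(path, n_mfcc=40, frames=128):
    y, sr = librosa.load(path, sr=16000)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, time)
    m = librosa.util.fix_length(m, size=frames, axis=1)   # pad/crop the time axis
    return m[..., np.newaxis]                              # (n_mfcc, frames, 1)

def build_emotion_cnn(n_classes, input_shape=(40, 128, 1)):
    inp = tf.keras.Input(shape=input_shape)
    x = tf.keras.layers.Conv2D(16, 3, activation="relu")(inp)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Conv2D(32, 3, activation="relu")(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    embedding = tf.keras.layers.Dense(64, activation="relu", name="embedding")(x)
    out = tf.keras.layers.Dense(n_classes, activation="softmax")(embedding)
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# After training, project the learned embeddings onto 3 axes (cf. Lovheim's cube):
# encoder = tf.keras.Model(model.input, model.get_layer("embedding").output)
# coords3d = PCA(n_components=3).fit_transform(encoder.predict(mfcc_batch))
```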
Procedia PDF Downloads 154
284 A Holistic View of Microbial Community Dynamics during a Toxic Harmful Algal Bloom
Authors: Shi-Bo Feng, Sheng-Jie Zhang, Jin Zhou
Abstract:
The relationship between microbial diversity and algal blooms has received considerable attention for decades. Microbes undoubtedly affect annual bloom events and impact the physiology of both partners, as well as shape ecosystem diversity. However, knowledge about the interactions and network correlations among the broader spectrum of microbes that drive the dynamics of a complete bloom cycle is limited. In this study, pyrosequencing and network approaches were used simultaneously to assess the association patterns among bacteria, archaea, and microeukaryotes in surface water and sediments in response to a natural dinoflagellate (Alexandrium sp.) bloom. In surface water, within the bacterial community, Gamma-Proteobacteria and Bacteroidetes dominated in the initial bloom stage, while Alpha-Proteobacteria, Cyanobacteria, and Actinobacteria became the most abundant taxa during the post-bloom stage. The archaeal community clustered predominantly with methanogenic members in the early pre-bloom period, while the majority of species identified in the late-bloom stage were ammonia-oxidizing archaea and Halobacteriales. Among eukaryotes, the dinoflagellate (Alexandrium sp.) dominated in the onset stage, whereas multiple species (such as microzooplankton, diatoms, green algae, and rotifers) coexisted in the bloom collapse stage. In sediments, microbial biomass and species richness were much higher than in the water body. Only Flavobacteriales and Rhodobacterales showed a slight response to the bloom stages. Unlike the bacteria, the archaeal and eukaryotic community structures in the sediment showed only small fluctuations. The network analyses of inter-specific associations show that bacteria (Alteromonadaceae, Oceanospirillaceae, Cryomorphaceae, and Piscirickettsiaceae) and some microeukaryotes (Mediophyceae, Mamiellophyceae, Dictyochophyceae and Trebouxiophyceae) have a stronger impact on the structuring of phytoplankton communities than archaeal effects. The changes in populations were also significantly shaped by water temperature and substrate availability (N & P resources). The results suggest that clades are specialized at different time periods and that the pre-bloom succession was mainly bottom-up controlled, while the late-bloom period was controlled by top-down patterns. Additionally, the phytoplankton and prokaryotic communities correlated better with each other, which indicates that interactions among microorganisms are critical in controlling plankton dynamics and fates. Our results supply a wider view (across temporal and spatial scales) for understanding the microbial ecological responses and their network associations during algal blooms. This gives us a potential multidisciplinary explanation for algal-microbe interactions and helps us move beyond the traditional view of the patterns of algal bloom initiation, development, decline, and biogeochemistry.
Keywords: microbial community, harmful algal bloom, ecological process, network
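As an illustration of the "network approach" mentioned above (and not the authors' actual pipeline), the sketch below builds a simple co-occurrence network from pairwise Spearman correlations of taxon abundances across samples; the taxa list, thresholds and abundance table are stand-in assumptions.

```python
# Illustrative co-occurrence network: edges are strong, significant abundance correlations.
import numpy as np
import networkx as nx
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
taxa = ["Gammaproteobacteria", "Bacteroidetes", "Alphaproteobacteria",
        "Cyanobacteria", "Alexandrium", "Mediophyceae"]
abundance = rng.poisson(50, size=(20, len(taxa)))     # 20 samples x 6 taxa (stand-in data)

graph = nx.Graph()
graph.add_nodes_from(taxa)
for i in range(len(taxa)):
    for j in range(i + 1, len(taxa)):
        rho, p = spearmanr(abundance[:, i], abundance[:, j])
        if abs(rho) >= 0.6 and p < 0.05:              # keep only strong, significant links
            graph.add_edge(taxa[i], taxa[j], weight=rho)

print(graph.number_of_nodes(), "taxa,", graph.number_of_edges(), "associations")
```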
Procedia PDF Downloads 114
283 Effect of Vitrification on Embryos Euploidy Obtained from Thawed Oocytes
Authors: Natalia Buderatskaya, Igor Ilyin, Julia Gontar, Sergey Lavrynenko, Olga Parnitskaya, Ekaterina Ilyina, Eduard Kapustin, Yana Lakhno
Abstract:
Introduction: It is known that cryopreservation of oocytes has peculiar features due to the complex structure of the oocyte. One of the most important features is that mature oocytes contain the meiotic division spindle, which is very sensitive to even the slightest variation in temperature. Thus, the main objective of this study is to analyse the euploid embryos obtained from thawed oocytes in comparison with the data of preimplantation genetic screening (PGS) in fresh embryo cycles. Material and Methods: The study was conducted at 'Medical Centre IGR' from January to July 2016. Data were analysed for 908 donor oocytes obtained in 67 cycles of assisted reproductive technologies (ART), of which 693 oocytes were used in 51 'fresh' cycles (group A) and 215 oocytes in 16 ART programs with vitrification of female gametes (group B). The average ages of donors in the groups were 27.3±2.9 and 27.8±6.6 years, respectively. Stimulation of superovulation was conducted in the standard way. Vitrification was performed 1-2 hours after transvaginal puncture, and thawing of oocytes was carried out in accordance with the standard protocol of Cryotech (Japan). ICSI was performed 4-5 hours after transvaginal follicle puncture for fresh oocytes, or after thawing for vitrified female gametes. For the PGS, an embryonic biopsy was done on the third or the fifth day after fertilization. Diagnostic procedures were performed using fluorescence in situ hybridization for chromosomes 13, 16, 18, 21, 22, X and Y. Only morphologically good-quality blastocysts, graded according to the Gardner criteria, were used for transfer. Statistical hypotheses were tested using the t and χ² criteria at significance levels of p<0.05, p<0.01 and p<0.001. Results: The mean number of mature oocytes per cycle and patient was 13.58±6.65 in group A and 13.44±6.68 in group B. The survival of oocytes after thawing totaled 95.3% (n=205), which indicates that the vitrification performed was highly effective. The proportion of zygotes was 91.1% (n=631) in group A and 80.5% (n=165) in group B, a statistically significant difference between the groups (p<0.001) explained by the elimination of non-viable oocytes after vitrification. This is confirmed by the fact that on the fifth day of embryo development there was no statistically significant difference in the proportion of blastocysts (p>0.05), which amounted to 61.6% (n=389) and 63.0% (n=104) in the two groups, respectively. For the PGS, 250 embryos were analyzed in group A and 72 embryos in group B. The results showed that 40.0% (n=100) of embryos in group A and 41.7% (n=30) in group B were euploid for the studied chromosomes, with no statistically significant difference (p>0.05). The clinical pregnancy rates in the groups amounted to 64.7% (22 pregnancies from 34 embryo transfers) and 61.5% (8 pregnancies from 13 embryo transfers), respectively, also with no significant difference between the groups (p>0.05). Conclusions: The results showed that vitrification does not affect the rate of euploid embryos obtained in assisted reproductive technologies and is not reflected in their morphological characteristics in ART programs.
Keywords: euploid embryos, preimplantation genetic screening, thawing oocytes, vitrification
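The euploidy comparison reported above (40.0% of 250 embryos in group A versus 41.7% of 72 in group B) can be reproduced with a simple χ² test on the counts given in the abstract; the snippet below is a quick illustrative check, not the authors' analysis.

```python
# Chi-square test on the euploidy counts reported in the abstract.
from scipy.stats import chi2_contingency

table = [[100, 250 - 100],   # group A (fresh oocytes): euploid, non-euploid
         [30, 72 - 30]]      # group B (thawed oocytes): euploid, non-euploid
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}")   # p > 0.05, consistent with the abstract
```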
Procedia PDF Downloads 334282 Debriefing Practices and Models: An Integrative Review
Authors: Judson P. LaGrone
Abstract:
Simulation-based education in curricula was once a luxury component of nursing programs but now serves as a vital element of an individual's learning experience. A debriefing occurs after the simulation scenario or clinical experience is completed to allow the instructor(s) or trained professional(s) to act as a debriefer and guide a reflection with the purpose of acknowledging, assessing, and synthesizing the thought process, decision-making process, and actions/behaviors performed during the scenario or clinical experience. Debriefing is a vital component of the simulation process and educational experience, allowing the learner(s) to progressively build upon past experiences and current scenarios within a safe and welcoming environment, with a guided dialog to enhance future practice. The aim of this integrative review was to assess current practices of debriefing models in simulation-based education for health care professionals and students. The following databases were utilized for the search: CINAHL Plus, Cochrane Database of Systematic Reviews, EBSCO (ERIC), PsycINFO (Ovid), and Google Scholar. The advanced search option was useful to narrow down the search of articles (full text, Boolean operators, English language, peer-reviewed, published in the past five years). Key terms included debrief, debriefing, debriefing model, debriefing intervention, psychological debriefing, simulation, simulation-based education, simulation pedagogy, health care professional, nursing student, and learning process. Included studies focus on debriefing after clinical scenarios of nursing students, medical students, and interprofessional teams conducted between 2015 and 2020. Common themes were identified after the analysis of articles matching the search criteria. Several debriefing models are addressed in the literature, with similar effectiveness for participants in clinical simulation-based pedagogy. Themes identified included (a) importance of debriefing in simulation-based pedagogy, (b) environment in which debriefing takes place is an important consideration, (c) individuals who should conduct the debrief, (d) length of the debrief, and (e) methodology of the debrief. Debriefing models supported by theoretical frameworks and facilitated by trained staff are vital for a successful debriefing experience. Models ranged from self-debriefing to facilitator-led debriefing, video-assisted debriefing, rapid cycle deliberate practice, and reflective debriefing. A recurring finding was the emphasis on continued research into systematic tool development and on analysis of the validity and effectiveness of current debriefing practices. There is a lack of consistency in debriefing models across nursing curricula, along with a growing number of faculty who are ill-prepared to facilitate the debriefing phase of the simulation.Keywords: debriefing model, debriefing intervention, health care professional, simulation-based education
Procedia PDF Downloads 142281 Study of Chemical State Analysis of Rubidium Compounds in Lα, Lβ₁, Lβ₃,₄ and Lγ₂,₃ X-Ray Emission Lines with Wavelength Dispersive X-Ray Fluorescence Spectrometer
Authors: Harpreet Singh Kainth
Abstract:
Rubidium salts have been commonly used as an electrolyte to improve the cycle efficiency of Li-ion batteries. In recent years, they have been implemented at large scale for further technological advances to improve the rate performance and cyclability of batteries. X-ray absorption spectroscopy (XAS) is a powerful tool for obtaining information on the electronic structure, which involves the chemical state analysis of the active materials used in batteries. However, this technique is not well suited for industrial applications because it needs a synchrotron X-ray source and special sample handling for in-situ measurements. In contrast, conventional wavelength dispersive X-ray fluorescence (WDXRF) spectrometry is a nondestructive technique used to study the chemical shift in all transitions (K, L, M, …) and does not require any special pre-preparation. In the present work, the fluorescent Lα, Lβ₁, Lβ₃,₄ and Lγ₂,₃ X-ray spectra of rubidium in different chemical forms (Rb₂CO₃, RbCl, RbBr, and RbI) have been measured for the first time with a high-resolution wavelength dispersive X-ray fluorescence (WDXRF) spectrometer (Model: S8 TIGER, Bruker, Germany), equipped with an Rh-anode X-ray tube (4 kW, 60 kV and 170 mA). In the ₃₇Rb compounds, the measured energy shifts are in the range (-0.45 to -1.71) eV for the Lα X-ray peak, (0.02 to 0.21) eV for Lβ₁, (0.04 to 0.21) eV for Lβ₃, (0.15 to 0.43) eV for Lβ₄ and (0.22 to 0.75) eV for the Lγ₂,₃ X-ray emission lines. The chemical shifts in the rubidium compounds have been measured with Rb₂CO₃ taken as the standard reference. A Voigt function is used to determine the central peak position for all compounds. Both positive and negative shifts have been observed in the L-shell emission lines: in the Lα X-ray emission line all compounds show a negative shift, while in the Lβ₁, Lβ₃,₄, and Lγ₂,₃ X-ray emission lines all compounds show a positive shift. These positive and negative shifts correspond to increases or decreases in the X-ray emission energies. It appears that the ligands attached to the central metal atom attract or repel the electrons towards or away from the parent nucleus; this pulling and pushing character affects the central peak position of the compounds, which causes a chemical shift. To understand the chemical effect more fully, factors such as electronegativity, line intensity ratio, effective charge and bond length are considered in the chemical state analysis of the rubidium compounds. The effective charge has been calculated from the Suchet and Pauling methods, while the line intensity ratio has been obtained from the area under the relevant emission peak. In the present work, it has been observed that electronegativity, effective charge and intensity ratio (Lβ₁/Lα, Lβ₃,₄/Lα and Lγ₂,₃/Lα) are inversely proportional to the chemical shift (RbCl > RbBr > RbI), while bond length has been found directly proportional to the chemical shift (RbI > RbBr > RbCl).Keywords: chemical shift in L emission lines, bond length, electronegativity, effective charge, intensity ratio, rubidium compounds, WDXRF spectrometer
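A minimal sketch of the peak-position step described above, assuming SciPy is available. The synthetic spectra, energy window, starting values and flat-background model are illustrative assumptions, not the authors' actual fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import voigt_profile

def voigt(x, amplitude, center, sigma, gamma, offset):
    """Voigt profile on a flat background, used to locate the central peak position."""
    return amplitude * voigt_profile(x - center, sigma, gamma) + offset

def peak_center(energy_ev, counts, guess_center_ev):
    """Fit the measured emission line and return the fitted peak center in eV."""
    p0 = [counts.max(), guess_center_ev, 0.5, 0.5, counts.min()]
    popt, _ = curve_fit(voigt, energy_ev, counts, p0=p0)
    return popt[1]

# Synthetic demonstration around the Rb L-alpha line (~1694 eV); real spectra
# would come from the WDXRF scans of Rb2CO3 (reference) and the other compounds.
energy = np.linspace(1690.0, 1698.0, 400)
reference = voigt(energy, 1000.0, 1694.10, 0.6, 0.4, 20.0)
compound = voigt(energy, 900.0, 1693.65, 0.6, 0.4, 20.0)   # simulated -0.45 eV shift

shift_ev = peak_center(energy, compound, 1694.0) - peak_center(energy, reference, 1694.0)
print(f"chemical shift: {shift_ev:+.2f} eV")
```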
Procedia PDF Downloads 507280 Advances in Design Decision Support Tools for Early-stage Energy-Efficient Architectural Design: A Review
Authors: Maryam Mohammadi, Mohammadjavad Mahdavinejad, Mojtaba Ansari
Abstract:
The main driving forces behind the increasing movement towards the design of High-Performance Buildings (HPB) are building codes and rating systems that address the various components of the building and their impact on the environment and energy conservation through various methods, such as prescriptive methods or simulation-based approaches. The methods and tools developed to meet these needs, which are often based on building performance simulation tools (BPST), have limitations in terms of compatibility with the integrated design process (IDP) and HPB design, as well as use by architects in the early stages of design (when the most important decisions are made). To overcome these limitations, in recent years efforts have been made to develop Design Decision Support Systems, which are often based on artificial intelligence. Numerous needs and steps for designing and developing a Decision Support System (DSS) that complies with the early stages of energy-efficient architectural design - consisting of combinations of different methods in an integrated package - have been listed in the literature. While various review studies have been conducted on each of these techniques (such as optimization, sensitivity and uncertainty analysis, etc.) and their integration with specific targets, this article is a critical and holistic review of the research that leads to the development of applicable systems or the introduction of a comprehensive framework for developing models that comply with the IDP. Information resources such as Science Direct and Google Scholar are searched using specific keywords and the results are divided into two main categories: Simulation-based DSSs and Meta-simulation-based DSSs. The strengths and limitations of different models are highlighted, two general conceptual models are introduced for each category, and the degree of compliance of these models with the IDP framework is discussed. The research shows movement towards Multi-Level of Development (MOD) models, well combined with the early stages of integrated design (schematic design stage and design development stage), which are heuristic, hybrid and meta-simulation-based, and rely on big real-world data (such as Building Energy Management System data or web data). Obtaining, using and combining these data with simulation data to create models that handle higher uncertainty, are more dynamic and more sensitive to context and culture, as well as models that can generate economical, energy-efficient design scenarios using local data (more harmonized with circular economy principles), are important research areas in this field. The results of this study are a roadmap for researchers and developers of these tools.Keywords: integrated design process, design decision support system, meta-simulation based, early stage, big data, energy efficiency
Procedia PDF Downloads 162279 The French Ekang Ethnographic Dictionary. The Quantum Approach
Authors: Henda Gnakate Biba, Ndassa Mouafon Issa
Abstract:
Dictionaries modeled on the Western model [tonic accent languages] are not suitable and do not account for tonal languages phonologically, which is why the [prosodic and phonological] ethnographic dictionary was designed. It is a glossary that expresses the tones and the rhythm of words. It recreates exactly the speaking or singing of a tonal language and allows the non-speaker of this language to pronounce the words as if they were a native. It is a dictionary adapted to tonal languages. It was built from ethnomusicological theorems and phonological processes, following Jean-Jacques Rousseau's 1776 hypothesis that 'to say and to sing were once the same thing'. Each word in the French dictionary finds its corresponding word in the Ekang language (ekaη), and each ekaη word is written on a musical staff. This ethnographic dictionary is also an inventive, original and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological and linguistic conceptualization of languages, giving rise to the practice of interlocution between the social and cognitive sciences, the activities of artistic creation and the question of modeling in the human sciences: mathematics, computer science, translation automation and artificial intelligence. When you apply this theory to any text of a folk song in a tonal language, you not only piece together the exact melody, rhythm, and harmonies of that song as if you knew it in advance but also the exact speaking of this language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal and random music. The experimentation confirming the theorization produced a semi-digital, semi-analog application which translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, I use music reading and writing software that allows me to collect the data extracted from my mother tongue, already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you command the machine to produce a melody of blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.Keywords: music, language, entanglement, science, research
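To make the text-to-melody idea concrete, here is a toy Python sketch of mapping tone-marked syllables onto staff pitches. The tone inventory, pitch assignments and example syllables are invented for illustration only and are not taken from the Ekang dictionary or the authors' application.

```python
# A toy illustration (not the authors' system) of the core idea:
# mapping tone-marked syllables of a tonal language onto pitches.
TONE_TO_PITCH = {          # hypothetical tone inventory and pitch assignment
    "H": "G4",             # high tone
    "M": "E4",             # mid tone
    "L": "C4",             # low tone
    "HL": ("G4", "C4"),    # falling contour rendered as two notes
    "LH": ("C4", "G4"),    # rising contour rendered as two notes
}

def syllables_to_notes(syllables):
    """Turn (syllable, tone) pairs into a naive melodic line."""
    melody = []
    for text, tone in syllables:
        pitch = TONE_TO_PITCH[tone]
        notes = pitch if isinstance(pitch, tuple) else (pitch,)
        melody.extend((text, n) for n in notes)
    return melody

# Example: a made-up phrase with invented tone marks
print(syllables_to_notes([("mə", "H"), ("vé", "LH"), ("dzó", "L")]))
```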
Procedia PDF Downloads 69278 Balanced Score Card a Tool to Improve Naac Accreditation – a Case Study in Indian Higher Education
Authors: CA Kishore S. Peshori
Abstract:
Introduction: India, a country with vast diversity and a huge population, will have the largest young population by 2020. Higher education has always been, and will always be, the basic requirement for transforming a developing nation into a developed nation. To improve any system, it needs to be benchmarked, and there have been various tools for benchmarking systems. Education in India is delivered by universities which are mainly funded by the government. These universities, to deliver education, set up colleges, which are again funded mainly by the government. Recently, however, autonomy has also been given to universities and colleges. Moreover, foreign universities are waiting to enter Indian boundaries. With a large number of universities and colleges, it has become more and more necessary to measure these institutes for benchmarking, and there have been various tools for measuring institutes. In India, college assessments have been made compulsory by the UGC, and NAAC has been officially recognised as the accreditation agency. The NAAC assessment is based on seven criteria, namely: 1. Curricular assessments, 2. Teaching, learning and evaluation, 3. Research, consultancy and extension, 4. Infrastructure and learning resources, 5. Student support and progression, 6. Governance, leadership and management, 7. Innovation and best practices. NAAC tries to benchmark the institution for identification, sustainability, dissemination and adaptation of best practices. It grades the institution according to these seven criteria, and the funding of the institution is based on these grades. Many colleges are struggling to get the best grades but have not come across a systematic tool to achieve the results. The Balanced Scorecard developed by Kaplan has been a successful tool for corporates to develop best practices so as to increase their financial performance and also retain and increase their customers so as to grow the organization to the next level. It is time to test this tool for an educational institute. Methodology: The paper tries to develop a prototype for a college based on secondary data. Once a prototype is developed, the researcher, based on a questionnaire, will try to test this tool for successful implementation. The success of this research will depend on the implementation of the BSC in an institute and the improvement of its grading due to this implementation. Limitation of time is a major constraint in this research: as the NAAC cycle takes a minimum of 4 years for accreditation and reaccreditation, the methodology will limit itself to secondary data and a questionnaire to be circulated to colleges along with the prototype BSC model. Conclusion: BSC is a successful tool for enhancing the growth of an organization, and educational institutes are no exception. The BSC will only have to be realigned to suit the NAAC criteria. Once this prototype is developed, its success will be tested only on implementation, but this research paper will be the first step towards developing this tool and will also initiate that success by developing a questionnaire and evaluating the responses for moving to the next level of actual implementation.Keywords: balanced scorecard, benchmarking, NAAC, UGC
Procedia PDF Downloads 272277 Optimizing Weight Loss with AI (GenAISᵀᴹ): A Randomized Trial of Dietary Supplement Prescriptions in Obese Patients
Authors: Evgeny Pokushalov, Andrey Ponomarenko, John Smith, Michael Johnson, Claire Garcia, Inessa Pak, Evgenya Shrainer, Dmitry Kudlay, Sevda Bayramova, Richard Miller
Abstract:
Background: Obesity is a complex, multifactorial chronic disease that poses significant health risks. Recent advancements in artificial intelligence (AI) offer the potential for more personalized and effective dietary supplement (DS) regimens to promote weight loss. This study aimed to evaluate the efficacy of AI-guided DS prescriptions compared to standard physician-guided DS prescriptions in obese patients. Methods: This randomized, parallel-group pilot study enrolled 60 individuals aged 40 to 60 years with a body mass index (BMI) of 25 or greater. Participants were randomized to receive either AI-guided DS prescriptions (n = 30) or physician-guided DS prescriptions (n = 30) for 180 days. The primary endpoints were the percentage change in body weight and the proportion of participants achieving a ≥5% weight reduction. Secondary endpoints included changes in BMI, fat mass, visceral fat rating, systolic and diastolic blood pressure, lipid profiles, fasting plasma glucose, hsCRP levels, and postprandial appetite ratings. Adverse events were monitored throughout the study. Results: Both groups were well balanced in terms of baseline characteristics. Significant weight loss was observed in the AI-guided group, with a mean reduction of -12.3% (95% CI: -13.1 to -11.5%) compared to -7.2% (95% CI: -8.1 to -6.3%) in the physician-guided group, resulting in a treatment difference of -5.1% (95% CI: -6.4 to -3.8%; p < 0.01). At day 180, 84.7% of the AI-guided group achieved a weight reduction of ≥5%, compared to 54.5% in the physician-guided group (Odds Ratio: 4.3; 95% CI: 3.1 to 5.9; p < 0.01). Significant improvements were also observed in BMI, fat mass, and visceral fat rating in the AI-guided group (p < 0.01 for all). Postprandial appetite suppression was greater in the AI-guided group, with significant reductions in hunger and prospective food consumption, and increases in fullness and satiety (p < 0.01 for all). Adverse events were generally mild-to-moderate, with higher incidences of gastrointestinal symptoms in the AI-guided group, but these were manageable and did not impact adherence. Conclusion: The AI-guided dietary supplement regimen was more effective in promoting weight loss, improving body composition, and suppressing appetite compared to the physician-guided regimen. These findings suggest that AI-guided, personalized supplement prescriptions could offer a more effective approach to managing obesity. Further research with larger sample sizes is warranted to confirm these results and optimize AI-based interventions for weight loss.Keywords: obesity, AI-guided, dietary supplements, weight loss, personalized medicine, metabolic health, appetite suppression
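As an illustration of the odds-ratio comparison reported for the ≥5% weight-loss endpoint, the following minimal Python sketch computes an odds ratio with a Wald confidence interval. The per-arm counts are assumptions reconstructed from the reported percentages and group sizes, so the result need not match the published estimate of 4.3 (95% CI: 3.1 to 5.9) exactly.

```python
import math

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    """2x2 table: a,b = responders/non-responders (AI arm); c,d = same (physician arm)."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR), Wald method
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# 84.7% vs 54.5% responders at day 180, assuming roughly 30 participants per arm
print(odds_ratio_wald_ci(a=25, b=5, c=16, d=14))   # OR ~ 4.4 with a wide Wald CI
```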
Procedia PDF Downloads 9276 A Brazilian Study Applied to the Regulatory Environmental Issues of Nanomaterials
Authors: Luciana S. Almeida
Abstract:
Nanotechnology has revolutionized the world of science and technology, bringing great expectations due to its great potential for application in the most varied industrial sectors. The same characteristics that make nanoparticles interesting from the point of view of technological application may be undesirable when released into the environment. The small size of nanoparticles facilitates their diffusion and transport in the atmosphere, water, and soil and facilitates the entry and accumulation of nanoparticles in living cells. The main objective of this study is to evaluate the environmental regulatory process for nanomaterials in the Brazilian scenario. Three specific objectives were outlined. The first is to carry out a global scientometric study, in a research platform, with the purpose of identifying the main lines of study of nanomaterials in the environmental area. The second is to verify, by means of a bibliographic review, how environmental agencies in other countries have been working on this issue. And the third is to carry out an assessment of the Brazilian Nanotechnology Draft Law 6741/2013 with the state environmental agencies. This last one has the aim of identifying the agencies' knowledge of the subject and the resources available in the country for the implementation of the Policy. A questionnaire will be used as a tool for this evaluation, to identify the operational elements and build indicators through the Environment of Evaluation Application, a computational application developed for building questionnaires. At the end, the need to propose changes to the Draft Law of the National Nanotechnology Policy will be assessed. Initial studies related to the first specific objective have already identified that Brazil stands out in the production of scientific publications in the area of nanotechnology, although only a minority are focused on environmental impact studies. Regarding the general panorama in other countries, some findings have also been raised. The United States has included the nanoform of substances in an existing EPA (Environmental Protection Agency) program, the TSCA (Toxic Substances Control Act). The European Union issued a draft document amending Regulation 1907/2006 of the European Parliament and Council to cover the nanoform of substances. Both programs are based on the study and identification of environmental risks associated with nanomaterials, taking into consideration the product life cycle. In relation to Brazil, regarding the third specific objective, it is notable that the country does not have any regulations applicable to nanostructures, although there is a Draft Law in progress. In this document, it is possible to identify some requirements related to the environment, such as environmental inspection and licensing; industrial waste management; notification of accidents and application of sanctions. However, it is not known whether these requirements are sufficient for the prevention of environmental impacts and whether national environmental agencies will know how to apply them correctly. This study intends to serve as a basis for future actions regarding environmental management applied to the use of nanotechnology in Brazil.Keywords: environment; management; nanotechnology; policy
Procedia PDF Downloads 122275 Synergy Surface Modification for High Performance Li-Rich Cathode
Authors: Aipeng Zhu, Yun Zhang
Abstract:
The growing and grievous environmental problems, together with the exhaustion of energy resources, place urgent demands on the development of high-energy-density batteries. Considering factors including capacity, resources and the environment, manganese-based lithium-rich layer-structured cathode materials xLi₂MnO₃⋅(1-x)LiMO₂ (M = Ni, Co, Mn, and other metals) are drawing increasing attention due to their high reversible capacities, high discharge potentials, and low cost. They are expected to be among the most promising cathode materials for next-generation Li-ion batteries (LIBs) with higher energy densities. Unfortunately, their commercial applications are hindered by crucial drawbacks such as poor rate performance, limited cycle life and continuous fading of the discharge potential. After decades of extensive studies, significant achievements have been made in improving their cyclability and rate performance, but they still cannot meet the requirements of commercial utilization. One major problem for lithium-rich layer-structured cathode materials (LLOs) is the side reaction during cycling, which leads to severe surface degradation. In this process, metal ions can dissolve into the electrolyte, and the surface phase change can hinder the intercalation/deintercalation of Li ions, resulting in low capacity retention and low working voltage. To optimize the LLO cathode material, surface coating is an efficient method. Considering price and stability, Al₂O₃ was used as the coating material in this research. Meanwhile, due to the low initial Coulombic efficiency (ICE), the pristine LLOs were pretreated with KMnO₄ to increase the ICE. The precursor was prepared by a facile coprecipitation method. The as-prepared precursor was then thoroughly mixed with Li₂CO₃ and calcined in air at 500℃ for 5 h and 900℃ for 12 h to produce Li₁.₂[Ni₀.₂Mn₀.₆]O₂ (LNMO). The LNMO was then put into 0.1 ml/g KMnO₄ solution and stirred for 3 h. The resultant was filtered, washed with water, and dried in an oven. The LLOs obtained were dispersed in Al(NO₃)₃ solution. The mixture was lyophilized to ensure that the Al(NO₃)₃ was uniformly coated on the LLOs. After lyophilization, the LLOs were calcined at 500℃ for 3 h to obtain LNMO@LMO@ALO. The working electrodes were prepared by casting the mixture of active material, acetylene black, and binder (polyvinylidene fluoride) dissolved in N-methyl-2-pyrrolidone, with a mass ratio of 80:15:5, onto an aluminum foil. The electrochemical performance tests showed that the multiply surface-modified material had a higher initial Coulombic efficiency (84%) and better capacity retention (91% after 100 cycles) compared with pristine LNMO (76% and 80%, respectively). These results suggest that the KMnO₄ pretreatment and Al₂O₃ coating can increase the ICE and cycling stability.Keywords: Li-rich materials, surface coating, lithium ion batteries, Al₂O₃
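A minimal sketch of how the two headline metrics quoted above (initial Coulombic efficiency and capacity retention) are computed from cycling data; the capacity values are hypothetical and chosen only to reproduce figures of the same order as those reported.

```python
# Hypothetical capacity values (mAh/g); real values would come from the cycler export.
def initial_coulombic_efficiency(first_charge, first_discharge):
    """ICE = first-cycle discharge capacity / first-cycle charge capacity."""
    return first_discharge / first_charge

def capacity_retention(discharge_capacities):
    """Retention after N cycles = last discharge capacity / first discharge capacity."""
    return discharge_capacities[-1] / discharge_capacities[0]

charge_1, discharge_1 = 300.0, 252.0           # assumed first-cycle capacities
cycles = [252.0, 250.1, 248.9, 229.3]          # assumed discharge capacities over cycling
print(f"ICE = {initial_coulombic_efficiency(charge_1, discharge_1):.0%}")   # ~84%
print(f"retention = {capacity_retention(cycles):.0%}")                      # ~91%
```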
Procedia PDF Downloads 133274 The Impact of Artificial Intelligence on Food Industry
Authors: George Hanna Abdelmelek Henien
Abstract:
Quality and safety issues are common in Ethiopia's food processing industry, and they can negatively impact consumers' health and livelihoods. The country is known for its various agricultural products that are important to the economy. However, inadequate food quality and safety policies and management practices in the food processing industry have led to many health problems, foodborne illnesses and economic losses. This article aims to show the causes and consequences of food safety and quality problems in the food processing industry in Ethiopia and discuss possible solutions to solve them. One of the main reasons for food quality and safety problems in Ethiopia's food processing industry is the lack of adequate regulation and enforcement mechanisms. Inadequate food safety and quality policies have led to inefficiencies in food production. Additionally, the failure to monitor and enforce existing regulations has created an opportunity for unscrupulous companies to engage in harmful practices that endanger the lives of citizens. The impact of poor food quality and safety is significant, due to loss of life, high medical costs, and loss of consumer confidence in the food processing industry. Foodborne diseases such as diarrhoea, typhoid and cholera are common in Ethiopia, and food quality and safety play an important role in their prevention. Additionally, food recalls due to contamination often cause significant economic losses in the food processing industry. To solve these problems, the Ethiopian government began taking measures to improve food quality and safety in the food processing industry. One of the most prominent initiatives is the Ethiopian Food and Drug Administration (EFDA), which was established in 2010 to monitor and control the quality and safety of food and beverage products in the country. The EFDA has implemented many measures to improve food safety, such as carrying out routine inspections, monitoring the import of food products and implementing labeling requirements. Another solution that can improve food quality and safety in the food processing industry in Ethiopia is the implementation of a food safety management system (FSMS). An FSMS is a set of procedures and policies designed to identify, assess and control food safety risks during food processing. Implementing an FSMS can help companies in the food processing industry identify and address potential risks before they harm consumers. Additionally, implementing an FSMS can help companies comply with current safety and security regulations. Consequently, improving food safety policy and management systems in Ethiopia's food processing industry is important to protect people's health and improve the country's economy. This requires addressing the root causes of poor food quality and safety and implementing practical solutions that can help improve overall food safety and quality in the country, such as establishing regulatory bodies and implementing food safety management systems.Keywords: food quality, food safety, policy, management system, food processing industry, food traceability, industry 4.0, internet of things, blockchain, best worst method, MARCOS
Procedia PDF Downloads 63273 Flexible Design Solutions for Complex Free form Geometries Aimed to Optimize Performances and Resources Consumption
Authors: Vlad Andrei Raducanu, Mariana Lucia Angelescu, Ion Cinca, Vasile Danut Cojocaru, Doina Raducanu
Abstract:
By using smart digital tools, such as generative design (GD) and digital fabrication (DF), highly topical problems concerning resource optimization (materials, energy, time) can be solved, and free-form applications or products can be created. In the new digital technology, materials are active, designed in response to a set of performance requirements, which imposes a total rethinking of old material practices. The article presents the key steps of the design procedure for a free-form architectural object - a column-type object with connections forming an adaptive 3D surface - using the parametric design methodology and exploiting the properties of conventional metallic materials. In parametric design, the form of the created object or space is shaped by varying the parameter values, and the relationships between the forms are described by mathematical equations. Digital parametric design is based on specific procedures, such as shape grammars, Lindenmayer systems, cellular automata, genetic algorithms or swarm intelligence, each of which has limitations that make it applicable only in certain cases. In the paper, the design process stages and the shape-grammar-type algorithm are presented. The generative design process relies on two basic principles: the modeling principle and the generative principle. The generative method is based on a form-finding process that creates many 3D spatial forms, using an algorithm conceived to apply its generating logic onto different input geometry. Once the algorithm is realized, it can be applied repeatedly to generate the geometry for a number of different input surfaces. The generated configurations are then analyzed through a technical or aesthetic selection criterion, and finally the optimal solution is selected. The endless generative capacity of the codes and algorithms used in digital design offers various conceptual possibilities and optimal solutions for the increasing technical and environmental demands of the building industry and architecture. Constructions or spaces generated by parametric design can be specifically tuned in order to meet certain technical or aesthetic requirements. The proposed approach has direct applicability in sustainable architecture, offering important potential economic advantages, a flexible design (which can be changed until the end of the design process) and unique geometric models of high performance.Keywords: parametric design, algorithmic procedures, free-form architectural object, sustainable architecture
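A minimal Python sketch of the generative loop described above: vary parameters, generate candidate forms, score them against a selection criterion, and keep the best. The column profile, parameter ranges and the material-use score are invented placeholders, not the authors' actual shape-grammar rules.

```python
import math
import random

def column_profile(height, twist, waist, n_levels=20):
    """Generate (z, ring radius, ring rotation) triples along a column for given parameters."""
    return [
        (level * height / n_levels,                              # z coordinate
         1.0 - waist * math.sin(math.pi * level / n_levels),     # ring radius
         twist * level / n_levels)                               # ring rotation
        for level in range(n_levels + 1)
    ]

def material_score(profile):
    """Toy criterion: summed ring circumference as a proxy for material use."""
    return sum(2 * math.pi * r for _, r, _ in profile)

random.seed(0)
candidates = [
    column_profile(height=4.0,
                   twist=random.uniform(0, math.pi / 2),
                   waist=random.uniform(0.1, 0.5))
    for _ in range(200)
]
best = min(candidates, key=material_score)   # select the optimal configuration
print(f"best material score: {material_score(best):.2f}")
```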
Procedia PDF Downloads 377272 How Holton’s Thematic Analysis Can Help to Understand Why Fred Hoyle Never Accepted Big Bang Cosmology
Authors: Joao Barbosa
Abstract:
After an intense dispute between big bang cosmology and its big rival, steady-state cosmology, some important experimental observations, such as the determination of the helium abundance in the universe and the discovery of the cosmic background radiation in the 1960s, were decisive for the progressive and wide acceptance of big bang cosmology and the inevitable abandonment of steady-state cosmology. But, despite solid theoretical support and those solid experimental observations favorable to big bang cosmology, Fred Hoyle, one of the proponents of the steady state and the main opponent of the idea of the big bang (which, paradoxically, he himself had baptized), never gave up and continued to fight for the idea of a stationary (or quasi-stationary) universe until the end of his life, even after decades of widespread consensus around big bang cosmology. We can try to understand this persistent attitude of Hoyle by applying Holton's thematic analysis to cosmology. Holton recognizes in scientific activity a dimension that, even if unconscious or not acknowledged, is nevertheless very important in the work of scientists, in implicit articulation with the experimental and theoretical dimensions of science. This is the thematic dimension, constituted by themata – concepts, methodologies, and hypotheses with a metaphysical, aesthetic, logical, or epistemological nature, associated both with the cultural context and the individual psychology of scientists. In practice, themata can be expressed through personal preferences and choices that guide the individual and collective work of scientists. Thematic analysis shows that big bang cosmology is mainly based on a set of themata consisting of evolution, finitude, life cycle, and change; steady-state cosmology is based on the opposite themata: steady state, infinity, continuous existence, and constancy. The passionate controversy between these cosmological views is part of an old cosmological opposition: the thematic opposition between an evolutionary view of the world (associated with Heraclitus) and a stationary view (associated with Parmenides). Personal preferences seem to have been important in this (thematic) controversy, and the thematic analysis that was developed shows that Hoyle is a very illustrative example of a life-long personal commitment to certain themata, in this case to the themata opposite to those of big bang cosmology. His struggle against the big bang idea was strongly based on philosophical and even religious reasons – which, in a certain sense and from a Holtonian perspective, is related to thematic preferences. In this personal and persistent struggle, Hoyle always rejected the way some experimental observations were considered decisive in favor of the big bang idea, arguing that the success of this idea was based on sociological and cultural prejudices. This attitude of Hoyle's is a personal thematic attitude, in which the acceptance or rejection of what is presented as proof or scientific fact is conditioned by themata: what is a proof or a scientific fact for one scientist is something yet to be established for another scientist who defends different or even opposite themata.Keywords: cosmology, experimental observations, Fred Hoyle, interpretation, life-long personal commitment, themata
Procedia PDF Downloads 168