Search results for: population based research
134 Prevalence, Median Time, and Associated Factors with the Likelihood of Initial Antidepressant Change: A Cross-Sectional Study
Authors: Nervana Elbakary, Sami Ouanes, Sadaf Riaz, Oraib Abdallah, Islam Mahran, Noriya Al-Khuzaei, Yassin Eltorki
Abstract:
Major Depressive Disorder (MDD) requires therapeutic interventions during the initial month after diagnosis for better disease outcomes. International guidelines recommend a duration of 4–12 weeks for an initial antidepressant (IAD) trial at an optimized dose to achieve a response. If depressive symptoms persist after this duration, guidelines recommend switching, augmenting, or combining strategies as the next step. Many patients with MDD in the mental health setting have been incorrectly labeled as treatment-resistant when in fact they have not received an adequate trial of guideline-recommended therapy. Premature discontinuation of the IAD due to ineffectiveness can cause unfavorable consequences. Avoiding irrational practices such as subtherapeutic doses of the IAD, premature switching between ADs, and unjustified polypharmacy can help the disease go into remission. We aimed to determine the prevalence and the patterns of strategies applied after an IAD was changed because of a suboptimal response as a primary outcome. Secondary outcomes included the median survival time on the IAD before any change, and the predictors associated with IAD change. This was a retrospective cross-sectional study conducted in the Mental Health Services in Qatar. A dataset covering January 1, 2018, to December 31, 2019, was extracted from the electronic health records. Inclusion and exclusion criteria were defined and applied. The sample size was calculated to be at least 379 patients. Descriptive statistics were reported as frequencies and percentages, in addition to means and standard deviations. The median time on the IAD before any change strategy was calculated using survival analysis. Associated predictors were examined using unadjusted and adjusted Cox regression models. A total of 487 patients met the inclusion criteria of the study. The average age of participants was 39.1 ± 12.3 years.
Patients experiencing a first MDD episode (255; 52%) constituted the major part of our sample compared with the relapse group (206; 42%). In total, 431 (88%) of the patients had their IAD changed to some strategy before the end of the study. Almost half of the sample (212 (49%); 95% CI [44–53%]) had their IAD changed within 30 days. Switching was consistently more common than combination or augmentation at any timepoint. The median time to IAD change was 43 days (95% CI [33.2–52.7]). Five independent variables (age, bothersome side effects, non-optimization of the dose before any change, comorbid anxiety, first-onset episode) were significantly associated with the likelihood of IAD change in the unadjusted analysis. The factors statistically associated with a higher hazard of IAD change in the adjusted analysis were younger age, non-optimization of the IAD dose before any change, and comorbid anxiety. Because almost half of the patients in this study changed their IAD as early as within the first month, efforts to avoid treatment failure are needed to ensure patient-treatment targets are met. The findings of this study can provide direct clinical guidance for health care professionals, since optimized, evidence-based use of AD medication can improve the clinical outcomes of patients with MDD, and can also help identify high-risk factors that shorten the survival time on the IAD, such as young age and comorbid anxiety.
Keywords: initial antidepressant, dose optimization, major depressive disorder, comorbid anxiety, combination, augmentation, switching, premature discontinuation
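The reported median time to IAD change comes from a survival analysis. As a rough illustration of how such a median is obtained (this is not the study's code, and the day values below are hypothetical), a minimal Kaplan-Meier estimator can be sketched in a few lines:

```python
# Minimal Kaplan-Meier sketch of "median time to IAD change".
# times  -- days on the initial antidepressant until change or censoring
# events -- 1 if the IAD was changed, 0 if censored (no change observed)
def km_median(times, events):
    data = sorted(zip(times, events))
    n = len(data)
    surv = 1.0  # running survival probability S(t)
    i = 0
    while i < n:
        t = data[i][0]
        changed = 0
        at_risk = n - i  # patients still on the IAD just before time t
        j = i
        while j < n and data[j][0] == t:
            changed += data[j][1]
            j += 1
        surv *= 1 - changed / at_risk  # product-limit step
        if surv <= 0.5:
            return t  # first time S(t) drops to 0.5 or below
        i = j
    return None  # median not reached within follow-up

# Hypothetical cohort: half change early, some are censored.
days   = [12, 25, 30, 43, 60, 90, 120, 120]
change = [1,  1,  1,  1,  0,  1,  0,   0]
print(km_median(days, change))  # 43
```

Censored patients (still on the IAD at last follow-up) remain in the at-risk count until their censoring time, which is why the median differs from a naive median of change dates.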
Procedia PDF Downloads 150
133 Investigation of Different Electrolyte Salts Effect on ZnO/MWCNT Anode Capacity in LIBs
Authors: Şeyma Dombaycıoğlu, Hilal Köse, Ali Osman Aydın, Hatem Akbulut
Abstract:
Rechargeable lithium-ion batteries (LIBs) have been considered one of the most attractive energy storage choices for laptop computers, electric vehicles and cellular phones owing to their high energy and power density. Compared with conventional carbonaceous materials, transition metal oxides (TMOs) have attracted great interest and stand out among versatile novel anode materials due to their high theoretical specific capacity, wide availability and good safety performance. ZnO, as an anode material for LIBs, has a high theoretical capacity of 978 mAh g-1, much higher than that of the conventional graphite anode (∼370 mAh g-1). However, several major problems still hinder the practical use of ZnO powders as an anode material for LIBs: poor cycleability, resulting from the severe volume expansion and contraction during the alloying-dealloying cycles with Li+ ions and the associated charge transfer process, as well as the pulverization and agglomeration of individual particles, which drastically reduce the total entrance/exit sites available for Li+ ions. Therefore, a great deal of effort has been devoted to overcoming these problems, and many methods have been developed. In most of these methods, it is claimed that carbon nanotubes (CNTs) will radically improve the performance of batteries, because their unique structure may enhance the kinetic properties of the electrodes and result in an extremely high specific charge compared with the theoretical limits of graphitic carbon. Due to the outstanding properties of CNTs, a MWCNT buckypaper substrate is considered a buffer material to prevent mechanical disintegration of the anode material during battery operation. As the bridge connecting the positive and negative electrodes, the electrolyte plays a critical role in the overall electrochemical performance of the cell, including rate, capacity, durability and safety.
Commercial electrolytes for Li-ion batteries normally consist of certain lithium salts and mixed organic linear and cyclic carbonate solvents. LiPF6 is used most commonly, owing to its remarkable features, including high solubility, good ionic conductivity, a high dissociation constant and satisfactory electrochemical stability for commercial fabrication. Besides LiPF6, LiBF4 is well known as a conducting salt for LIBs. LiBF4 shows better temperature stability in organic carbonate based solutions and less moisture sensitivity compared to LiPF6. In this work, free-standing zinc oxide (ZnO) and multiwalled carbon nanotube (MWCNT) nanocomposite materials were prepared by a sol-gel technique, giving a high-capacity anode material for lithium-ion batteries. Electrolyte solutions (containing 1 M Li+ ions) were prepared with different Li salts in a glove box. For this purpose, LiPF6 and LiBF4 salts, as well as a mixture of these salts, were dissolved in EC:DMC solvent (1:1, w/w). CR2016 cells were assembled using these prepared electrolyte solutions, the ZnO/MWCNT buckypaper nanocomposites as working electrodes, metallic lithium as the counter electrode and polypropylene (PP) as the separator. To investigate the effect of the different Li salts on the electrochemical performance of the ZnO/MWCNT nanocomposite anode material, electrochemical tests were performed at room temperature.
Keywords: anode, electrolyte, Li-ion battery, ZnO/MWCNT
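The amount of salt needed for a 1 M electrolyte follows directly from the molar masses. As a back-of-the-envelope sketch (not the authors' procedure; the batch volume is illustrative, the molar masses are standard values):

```python
# Mass of lithium salt for a target molarity -- a sketch, not the study's recipe.
MOLAR_MASS = {"LiPF6": 151.91, "LiBF4": 93.75}  # g/mol, standard values

def salt_mass_g(salt, molarity_mol_per_l, volume_l):
    """Grams of salt to dissolve in the given volume of EC:DMC (1:1, w/w)."""
    return MOLAR_MASS[salt] * molarity_mol_per_l * volume_l

# 100 ml of 1 M electrolyte for each salt:
print(round(salt_mass_g("LiPF6", 1.0, 0.1), 2))  # 15.19 g
print(round(salt_mass_g("LiBF4", 1.0, 0.1), 2))  # 9.38 g
```

The roughly 60% lighter LiBF4 salt loading for the same molarity is one practical difference between the two electrolytes, alongside the conductivity and moisture-sensitivity trade-offs discussed above.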
Procedia PDF Downloads 231
132 Degradation of Diclofenac in Water Using FeO-Based Catalytic Ozonation in a Modified Flotation Cell
Authors: Miguel A. Figueroa, José A. Lara-Ramos, Miguel A. Mueses
Abstract:
Pharmaceutical residues are a class of emerging contaminants of anthropogenic origin that are present in a myriad of waters with which human beings interact daily and are starting to affect the ecosystem directly. Conventional wastewater treatment systems are not capable of degrading these pharmaceutical effluents because their designs cannot handle the intermediate products and biological effects occurring during treatment. That is why it is necessary to hybridize conventional wastewater systems with non-conventional processes. In the specific case of an ozonation process, its efficiency depends strongly on a perfect dispersion of ozone, long interaction times of the gas-liquid phases and the size of the ozone bubbles formed throughout the reaction system. In order to improve these parameters, the use of a modified flotation cell has recently been proposed as a reactive system; flotation cells are used at an industrial level to facilitate the suspension of particles and spread gas bubbles through the reactor volume at a high rate. The objective of the present work is the development of a mathematical model that can closely predict the kinetic rates of the reactions taking place in the flotation cell at an experimental scale, by identifying proper reaction mechanisms that take into account the modified chemical and hydrodynamic factors in the FeO-catalyzed ozonation of diclofenac aqueous solutions in a flotation cell. The methodology comprises three steps: first, an experimental phase, where a modified flotation cell reactor is used to analyze the effects of ozone concentration and catalyst loading on the degradation of diclofenac aqueous solutions. The performance is evaluated through an index of utilized ozone, which relates the amount of ozone supplied to the system per milligram of degraded pollutant.
Next, a theoretical phase, where the reaction mechanisms taking place during the experiments are identified and proposed, detailing the multiple direct and indirect reactions the system goes through. Finally, a kinetic model is obtained that can mathematically represent the reaction mechanisms with adjustable parameters that can be fitted to the experimental results and give the model a proper physical meaning. The expected result is a robust reaction rate law that can simulate the improved results of diclofenac mineralization in water using the modified flotation cell reactor. By means of this methodology, the following results were obtained: a robust reaction pathway mechanism showcasing the intermediates, free radicals and products of the reaction; optimal values of the reaction rate constants, which yielded simulated Hatta numbers lower than 3 for the modeled system; degradation percentages of 100%; and a TOC (total organic carbon) removal percentage of 69.9%, requiring an optimal FeO catalyst load of only 0.3 g/L. These results showed that a flotation cell can be used as a reactor in ozonation, catalytic ozonation and photocatalytic ozonation processes, since it produces high reaction rate constants and reduces mass transfer limitations (Ha < 3) by producing microbubbles and maintaining a good catalyst distribution.
Keywords: advanced oxidation technologies, iron oxide, emergent contaminants, AOTs intensification
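The two figures of merit used in this abstract — the utilized-ozone index and the Hatta number — are straightforward to compute. The sketch below uses the common pseudo-first-order definition of Ha and purely illustrative numbers, not the study's fitted values:

```python
import math

def ozone_utilization_index(o3_supplied_mg, pollutant_removed_mg):
    """mg of O3 supplied per mg of diclofenac degraded (lower is better)."""
    return o3_supplied_mg / pollutant_removed_mg

def hatta_number(k1_per_s, diffusivity_m2_s, k_l_m_s):
    """Pseudo-first-order Hatta number: Ha = sqrt(k1 * D_O3) / kL.
    Ha below ~3 indicates the system is not fully mass-transfer limited."""
    return math.sqrt(k1_per_s * diffusivity_m2_s) / k_l_m_s

# Illustrative values: 500 mg O3 fed to remove 50 mg of pollutant;
# D_O3 ~ 1.7e-9 m2/s (ozone in water); kL enhanced by microbubbles.
print(ozone_utilization_index(500, 50))      # 10.0
print(hatta_number(10.0, 1.7e-9, 1e-4) < 3)  # True
```

Microbubbles raise the liquid-film coefficient kL, which is exactly what pushes Ha below the mass-transfer-limited regime in this formulation.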
Procedia PDF Downloads 112
131 Application of Flow Cytometry for Detection of Influence of Abiotic Stress on Plants
Authors: Dace Grauda, Inta Belogrudova, Alexei Katashev, Linda Lancere, Isaak Rashal
Abstract:
The goal of the study was the elaboration of an easily applicable flow cytometry method for detecting the influence of abiotic stress factors on plants, which could be useful for the detection of environmental stresses in urban areas. The lime tree Tilia vulgaris H. is a popular tree species used for urban landscaping in Europe and is one of the main species of street greenery in Riga, Latvia. Tree decline and low vitality have been observed in the central part of Riga. For this reason, lime trees were selected as a model object for the investigation. Between the end of June and the beginning of July, 12 samples from different urban environment locations, as well as plant material from a greenhouse, were collected. A BD FACSJazz® cell sorter (BD Biosciences, USA) with a flow cytometer function was used to test the viability of plant cells. The method was based on changes in the relative fluorescence intensity of cells under the blue laser (488 nm) after the influence of stress factors. Sphero™ rainbow calibration particles (3.0–3.4 μm, BD Biosciences, USA) in phosphate buffered saline (PBS) were used for calibration of the flow cytometer. BD Pharmingen™ PBS (BD Biosciences, USA) was used for the flow cytometry assays. The mean fluorescence intensity from the purified cell suspension samples was recorded. Preliminarily, multiple gate sizes and shapes were tested to find the one with the lowest CV. It was found that a low CV can be obtained if only the densest part of the plant cells' forward scatter/side scatter profile is analysed, because in this case the plant cells are most similar in size and shape. Young pollen cells in the one-nucleus stage were found to be the best for detecting the influence of abiotic stress. Only fresh plant material was used for the experiments: buds of Tilia vulgaris with a diameter of 2 mm. For establishment of the cell suspension (in vitro culture), a modified microspore culture protocol was applied. The cells were suspended in MS (Murashige and Skoog) medium.
To imitate urban dust, SiO2 nanoparticles were suspended in distilled water at a concentration of 0.001 g/ml. 1 ml of the SiO2 nanoparticle suspension was added to 10 ml of cell suspension, and the cells were then incubated under fast shaking for 1 and 3 hours. As another stress factor, irradiation of the cells for 20 min with UV was used (Hamamatsu light source L9566-02A, L10852 lamp, A10014-50-0110), with maximum relative intensity (100%) at 365 nm and ~310 nm (75%). Before UV irradiation, the cell suspension was placed in a thin layer on a filter paper disk (diameter 45 mm) in a Petri dish with solid MS medium. Cells without treatment were used as a control. Experiments were performed at room temperature (23-25 °C). Using BD FACS Software, a cell plot was created to determine the densest part, which was then gated using an oval-shaped gate. The gate included from 95 to 99% of all cells. To determine the relative fluorescence of the cells, a logarithmic fluorescence scale in arbitrary fluorescence units was used. 3×10³ gated cells were analysed from each sample. Significant differences were found among the relative fluorescence of cells from different trees after treatment with SiO2 nanoparticles and UV irradiation in comparison with the control.
Keywords: flow cytometry, fluorescence, SiO2 nanoparticles, UV irradiation
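The gating step described above — keeping only the densest part of the FSC/SSC profile so that the CV of fluorescence is minimized — can be mimicked crudely in a few lines. The circular-distance gate and the synthetic events below are simplified stand-ins for the oval gate drawn in BD FACS Software:

```python
import statistics

def cv(values):
    """Coefficient of variation: population standard deviation / mean."""
    return statistics.pstdev(values) / statistics.fmean(values)

def gate_densest(cells, frac=0.95):
    """Keep the fraction of cells closest to the median FSC/SSC point,
    a crude stand-in for an oval gate around the densest region.
    cells: (fsc, ssc, fluorescence) tuples."""
    fsc_med = statistics.median(c[0] for c in cells)
    ssc_med = statistics.median(c[1] for c in cells)
    ranked = sorted(cells, key=lambda c: (c[0] - fsc_med) ** 2 + (c[1] - ssc_med) ** 2)
    return ranked[:int(len(ranked) * frac)]

# 90 similar synthetic events plus 10 scatter/fluorescence outliers:
cells = [(100, 100, 10)] * 90 + [(500, 500, 100)] * 10
gated = gate_densest(cells, frac=0.9)
print(cv([c[2] for c in gated]) <= cv([c[2] for c in cells]))  # True
```

Restricting the analysis to the densest scatter region removes debris and aggregates, which is why the gated population shows a lower fluorescence CV than the full sample.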
Procedia PDF Downloads 412
130 Differential Expression Profile Analysis of DNA Repair Genes in Mycobacterium Leprae by qPCR
Authors: Mukul Sharma, Madhusmita Das, Sundeep Chaitanya Vedithi
Abstract:
Leprosy is a chronic human disease caused by Mycobacterium leprae, which cannot be cultured in vitro. Though treatable with multidrug therapy (MDT), the bacterium has recently been reported to be resistant to multiple antibiotics. Targeting DNA replication and repair pathways can serve as the foundation for developing new anti-leprosy drugs. Due to the absence of an axenic culture medium for the propagation of M. leprae, studying cellular processes, especially those belonging to DNA repair pathways, is challenging. The genome of M. leprae harbors several protein-coding genes with no previously assigned function, known as 'hypothetical proteins'. Here, we report the identification and expression of known and hypothetical DNA repair genes from a human skin biopsy and mouse footpads that are involved in base excision repair, direct reversal repair, and the SOS response. Initially, a bioinformatics approach based on sequence similarity and the identification of known protein domains was employed to screen the hypothetical proteins in the genome of M. leprae that are potentially related to DNA repair mechanisms. Before testing on clinical samples, pure stocks of bacterial reference DNA of M. leprae (NHDP63 strain) were used to construct standard curves to validate the qPCR experiments and identify their lower detection limit. Primers were designed to amplify the respective transcripts, and PCR products of the predicted size were obtained. Later, excisional skin biopsies of newly diagnosed untreated, treated, and drug-resistant leprosy cases from SIHR & LC hospital, Vellore, India were taken for the extraction of RNA. To determine the presence of the predicted transcripts, cDNA was generated from M. leprae mRNA isolated from clinically confirmed leprosy skin biopsy specimens across all the study groups. Melting curve analysis was performed to determine the integrity of the amplification and to rule out primer-dimer formation.
The Ct values obtained from qPCR were fitted to the standard curve to determine transcript copy numbers. The same procedure was applied to M. leprae extracted after processing footpads of nude mice infected with drug-sensitive and drug-resistant strains. 16S rRNA was used as a positive control. Of the 16 genes involved in BER, DR, and SOS, a differential expression pattern was observed in terms of Ct values when compared to the human samples; this was because of the different host and its immune response. However, no drastic variation in gene expression levels was observed in the human samples except for the nth gene. The higher expression of the nth gene could be because of mutations that may be associated with sequence diversity and drug resistance, which suggests an important role in the repair mechanism and remains to be explored. In both human and mouse samples, the SOS system genes lexA and recA, and the BER genes alkB and ogt, were expressed efficiently to deal with possible DNA damage. Together, the results of the present study suggest that DNA repair genes are constitutively expressed and may provide a reference for molecular diagnosis, therapeutic target selection, determination of treatment and prognostic judgment in M. leprae pathogenesis.
Keywords: DNA repair, human biopsy, hypothetical proteins, mouse footpads, Mycobacterium leprae, qPCR
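Converting a sample Ct to a transcript copy number via a standard curve is a linear fit of Ct against log10(copies). A hedged sketch of that calculation, using an idealized 10-fold dilution series (slope ≈ -3.32 corresponds to 100% PCR efficiency) rather than the study's actual curve:

```python
def fit_standard_curve(log10_copies, ct_values):
    """Least-squares fit of Ct = slope * log10(copies) + intercept."""
    n = len(ct_values)
    mx = sum(log10_copies) / n
    my = sum(ct_values) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_copies, ct_values))
    sxx = sum((x - mx) ** 2 for x in log10_copies)
    slope = sxy / sxx
    return slope, my - slope * mx

def copies_from_ct(ct, slope, intercept):
    """Invert the standard curve to estimate copy number from a sample Ct."""
    return 10 ** ((ct - intercept) / slope)

# Idealized dilution series of the reference DNA (values are illustrative):
logs = [3, 4, 5, 6]
cts  = [40 - 3.32 * x for x in logs]  # slope -3.32, intercept 40
slope, intercept = fit_standard_curve(logs, cts)
print(round(copies_from_ct(26.72, slope, intercept)))  # ~10000 copies
```

In practice the curve is built from serial dilutions of the NHDP63 reference stock, and the lowest dilution that still amplifies reproducibly sets the assay's detection limit.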
Procedia PDF Downloads 103
129 Broad Host Range Bacteriophage Cocktail for Reduction of Staphylococcus aureus as Potential Therapy for Atopic Dermatitis
Authors: Tamar Lin, Nufar Buchshtab, Yifat Elharar, Julian Nicenboim, Rotem Edgar, Iddo Weiner, Lior Zelcbuch, Ariel Cohen, Sharon Kredo-Russo, Inbar Gahali-Sass, Naomi Zak, Sailaja Puttagunta, Merav Bassan
Abstract:
Background: Atopic dermatitis (AD) is a chronic, relapsing inflammatory skin disorder that is characterized by dry skin and flares of eczematous lesions and intense pruritus. Multiple lines of evidence suggest that AD is associated with increased colonization by Staphylococcus aureus, which contributes to disease pathogenesis through the release of virulence factors that affect both keratinocytes and immune cells, leading to disruption of the skin barrier and immune cell dysfunction. The aim of the current study is to develop a bacteriophage-based product that specifically targets S. aureus. Methods: For the discovery of phages, environmental samples were screened on 118 S. aureus strains isolated from skin samples, followed by multiple enrichment steps. Natural phages were isolated, subjected to next-generation sequencing (NGS), and analyzed using proprietary bioinformatics tools for undesirable genes (toxins, antibiotic resistance genes, lysogeny potential), taxonomic classification, and purity. Phage host range was determined by an efficiency of plating (EOP) value above 0.1 and by the ability of the cocktail to completely lyse liquid bacterial cultures under different growth conditions (e.g., temperature, bacterial stage). Results: Sequencing analysis demonstrated that the 118 S. aureus clinical strains were distributed across the phylogenetic tree of all available RefSeq S. aureus genomes (~10,750 strains). Screening environmental samples on the S. aureus isolates resulted in the isolation of 50 lytic phages from different genera, including Silviavirus, Kayvirus, Podoviridae, and a novel unidentified phage. NGS sequencing confirmed the absence of toxic elements in the phages' genomes. The host range of the individual phages, as measured by the efficiency of plating (EOP), ranged from 41% (48/118) to 79% (93/118). Host range studies in liquid culture revealed that a subset of the phages can infect a broad range of S.
aureus strains in different metabolic states, including the stationary state. Combining the single-phage EOP results of selected phages yielded a broad host range cocktail that infected 92% (109/118) of the strains. When tested in vitro in a liquid infection assay, clearance was achieved in 87% (103/118) of the strains, with no evidence of phage resistance throughout the study (24 hours). An S. aureus host was identified that can be used for the production of all the phages in the cocktail at high titers suitable for large-scale manufacturing. This host was validated for the absence of contaminating prophages using advanced NGS methods combined with multiple production cycles. The phages are produced under optimized scale-up conditions and are being used for the development of a topical formulation (BX005) that may be administered to subjects with atopic dermatitis. Conclusions: A cocktail of natural phages targeting S. aureus was effective in reducing bacterial burden across multiple assays. Phage products may offer safe and effective steroid-sparing options for atopic dermatitis.
Keywords: atopic dermatitis, bacteriophage cocktail, host range, Staphylococcus aureus
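The cocktail coverage figure is simply the union of the individual phages' host ranges at the EOP ≥ 0.1 cutoff. A toy sketch of that bookkeeping, with invented EOP values rather than the study's panel:

```python
def eop(titer_on_test_strain, titer_on_reference_strain):
    """Efficiency of plating: plaque titer relative to the reference host."""
    return titer_on_test_strain / titer_on_reference_strain

def cocktail_coverage(eop_table, threshold=0.1):
    """Fraction of strains infected by at least one phage (EOP >= threshold).
    eop_table: {phage_name: [EOP for each strain, in a fixed order]}."""
    n_strains = len(next(iter(eop_table.values())))
    covered = set()
    for eops in eop_table.values():
        covered.update(i for i, e in enumerate(eops) if e >= threshold)
    return len(covered) / n_strains

# Toy panel of 5 strains and 2 phages (EOP values are invented):
table = {
    "phageA": [1.0, 0.5, 0.01, 0.0, 0.2],
    "phageB": [0.0, 0.0, 1.0,  0.0, 0.9],
}
print(cocktail_coverage(table))  # 0.8 (4 of 5 strains covered)
```

A cocktail can therefore cover more strains than its best single phage, which is how combining phages with 41-79% individual host ranges reaches 92% coverage.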
Procedia PDF Downloads 153
128 Femicide in the News: Jewish and Arab Victims and Culprits in the Israeli Hebrew Media
Authors: Ina Filkobski, Eran Shor
Abstract:
This article explores how newspapers cover the murder of women by family members and intimate partners. Three major Israeli newspapers were compared in order to analyse the coverage of Jewish and Arab victims and culprits and to examine whether and in what ways the media contribute to the construction of symbolic boundaries between minority and dominant social groups. A sample of 459 articles published between 2013 and 2015 was studied using a systematic qualitative content analysis. Our findings suggest that the treatment of murder cases by the media varies according to the ethnicity of both victims and culprits. The murder of Jews by family members or intimate partners was framed as a shocking and unusual event, a result of the individual personality or pathology of the culprit. Conversely, when Arabs were the killers, murders were often explained by focusing on the culture of the ethnic group, described as traditional, violent, and patriarchal. In two-thirds of the cases in which Arabs were involved, so-called 'honor killing' or other cultural explanations were proposed as the motive for the murder. This was often the case even before a suspect was detected, while the police investigation was at its very early stages, and often despite forceful denials from victims' families. In the case of Jewish culprits, more than half of the articles in our sample suggested a mental disorder to explain the acts, and cultural explanations were almost entirely absent. Beyond the emphasis on psychological vs. cultural explanations, newspaper articles also tend to provide much more detail about Jewish culprits than about Arab ones. Such detailed examinations convey a desire to make sense of the event by understanding the supposedly unique and unorthodox nature of the killer. Detailed accounts were usually absent from the reports on Arab killers.
Thus, even if reports do not explicitly offer cultural motivations for the murder, the fact that reports often remain laconic leaves people to draw their own conclusions, which are then likely to be based on existing cognitive scripts and previous reports on family murders among Arabs. Such treatment contributes to the notion that Arab and Muslim cultures, religions, and nationalities are essentially misogynistic and adhere to norms of honor and shame that are radically different from those of modern societies, such as the Jewish-Israeli one. Murder within the family is one of the most dramatic occurrences in the social world, and in societies that see themselves as modern it is a taboo, an ultimate signifier of danger. We suggest that representations of murder provide a valuable prism for examining the construction of group boundaries. Our analysis, therefore, contributes to the scholarly effort to understand the creation and reinforcement of symbolic boundaries between 'society' and its 'others' by systematically tracing the media constructions of 'otherness'. While our analysis focuses on Israel, studies on the United States, Canada, and various European countries with ethnically and racially heterogeneous populations make it clear that the stigmatisation and exclusion of visible, religious, and language minorities are not unique to the Israeli case.
Keywords: comparative study of media coverage of minority and majority groups, construction of symbolic group boundaries, murder of women by family members and intimate partners, Israel, Jews, Arabs
Procedia PDF Downloads 184
127 From Modelled Design to Reality through Material and Machinery Lab and Field Tests: Porous Concrete Carparks at the Wanda Metropolitano Stadium in Madrid
Authors: Manuel de Pazos-Liano, Manuel Cifuentes-Antonio, Juan Fisac-Gozalo, Sara Perales-Momparler, Carlos Martinez-Montero
Abstract:
The first-ever game in the Wanda Metropolitano Stadium, the new home of Club Atletico de Madrid, was played on September 16, 2017, thanks to the work of a multidisciplinary team that made it possible to combine urban development with sustainability goals. The new football ground sits on 1.2 km² of land owned by the city of Madrid. Its construction has dramatically increased the sealed area of the site (raising the runoff coefficient from 0.35 to 0.9), and the surrounding sewer network has no capacity for that extra flow. As an alternative to enlarging the existing 2.5 m diameter pipes, it was decided to detain runoff on site by means of an integrated and durable infrastructure that would neither inflate the construction cost nor represent a burden on the municipality's maintenance tasks. Instead of the more conventional option of building a large concrete detention tank, the decision was taken to use pervious pavement on the 3013 car parking spaces for sub-surface water storage, a solution aligned with the city water ordinance and the Madrid + Natural project. Making the idea a reality, in only five months and during the summer season (which forced the porous concrete to be poured only overnight), was a challenge never faced before in Spain, one that required innovation on both the material and the machinery side.
The process consisted of: a) defining the characteristics required for the porous concrete (compressive strength of 15 N/mm² and 20% voids); b) testing different porous concrete dosages at the construction company's laboratory; c) establishing the cross section so as to provide structural strength and sufficient water detention capacity (20 cm of porous concrete over 5 cm of 5/10 gravel, sitting on a 50 cm coarse 40/50 aggregate sub-base separated by a virgin fiber polypropylene geotextile fabric); d) hydraulic computer modelling (using the Full Hydrograph Method based on the Wallingford Procedure) to estimate the decrease in design peak flows (an average of 69% at the three car parking lots); e) use of a variety of machinery for the application of the porous concrete to achieve both structural strength and a permeable surface (including an inverse rotating roller imported from the USA, and the so-called CMI, a sliding concrete paver used in the construction of motorways with rigid pavements); f) full-scale pilots and final construction testing by an accredited laboratory (pavement compressive strength average value of 15 N/mm² and 0.0032 m/s permeability). The continuous testing and innovative construction process explained in detail in this article allowed performance to improve over time, finally proving the use of the CMI valid also for large porous car park applications. All this resulted in a success story that makes the Wanda Metropolitano Stadium a great demonstration site that will help the application of the Spanish Royal Decree 638/2016 (the stadium also features rainwater harvesting for grass irrigation).
Keywords: construction machinery, permeable carpark, porous concrete, SUDS, sustainable development
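The sub-surface storage offered by the cross section in step (c) can be estimated as thickness × void fraction summed over the layers. The 20% voids for the porous concrete comes from the text; the void fractions for the two aggregate layers below are illustrative assumptions, not measured values:

```python
# Detention capacity of the car-park cross section, per m2 of pavement.
LAYERS = [
    # (thickness_m, void_fraction)
    (0.20, 0.20),  # porous concrete: 20% voids (stated in the text)
    (0.05, 0.35),  # 5/10 gravel bed: void fraction ASSUMED for illustration
    (0.50, 0.30),  # 40/50 aggregate sub-base: void fraction ASSUMED
]

def storage_depth_m(layers=LAYERS):
    """Equivalent depth of water (m) the section can detain per m2."""
    return sum(thickness * voids for thickness, voids in layers)

print(round(storage_depth_m() * 1000))  # ~208 litres per m2 of car park
```

Summed over 3013 parking spaces, this kind of per-square-metre storage is what let the pavement itself replace a dedicated concrete detention tank.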
Procedia PDF Downloads 144
126 Development of DEMO-FNS Hybrid Facility and Its Integration in Russian Nuclear Fuel Cycle
Authors: Yury S. Shpanskiy, Boris V. Kuteev
Abstract:
Development of a fusion-fission hybrid facility based on the superconducting conventional tokamak DEMO-FNS has been underway in Russia since 2013. The main design goal is to reach technical feasibility and outline the prospects of industrial hybrid technologies providing the production of neutrons, fuel nuclides, tritium, high-temperature heat and electricity, as well as subcritical transmutation, in fusion-fission hybrid systems. The facility should operate in a steady-state mode at a fusion power of 40 MW and a fission power of 400 MW. Major tokamak parameters are the following: major radius R=3.2 m, minor radius a=1.0 m, elongation 2.1, triangularity 0.5. The design provides a neutron wall loading of ~0.2 MW/m², a lifetime neutron fluence of ~2 MWa/m², and a surface area of the active cores and tritium breeding blanket of ~100 m². Core plasma modelling showed that the neutron yield of ~10¹⁹ n/s is maximal if the tritium/deuterium density ratio is 1.5-2.3. The design of the electromagnetic system (EMS) defined its basic parameters, accounting for the strength and stability of the coils, and identified the most problematic nodes in the toroidal field coils and the central solenoid. The EMS generates the toroidal, poloidal and correcting magnetic fields necessary for plasma shaping and confinement inside the vacuum vessel. The EMS consists of eighteen superconducting toroidal field coils, eight poloidal field coils, five sections of a central solenoid, correction coils, and in-vessel coils for vertical plasma control. Supporting structures, the thermal shield, and the cryostat maintain its operation. The EMS operates with a pulse duration of up to 5000 hours at a plasma current of up to 5 MA. The vacuum vessel (VV) is an all-welded two-layer toroidal shell placed inside the EMS. The free space between the vessel shells is filled with water and boron steel plates, which form the neutron protection of the EMS. The VV volume is 265 m³; its mass with manifolds is 1800 tons.
The nuclear blanket of the DEMO-FNS facility was designed to provide the functions of minor actinide (MA) transmutation, tritium production and enrichment of spent nuclear fuel. Vertical reloading of the subcritical active cores with MA was chosen as the most promising option. Analysis of the device neutronics and the thermal-hydraulic characteristics of the hybrid blanket has been performed for the system, covering transmutation of minor actinides, production of tritium and enrichment of spent nuclear fuel. A study of the role of FNS facilities in the Russian closed nuclear fuel cycle was performed. It showed that during ~100 years of operation, three FNS facilities with a fission power of 3 GW, controlled by fusion neutron sources with a power of 40 MW, can burn 98 tons of minor actinides, and 198 tons of Pu-239 can be produced for the startup loading of 20 fast reactors. Instead of Pu-239, up to 25 kg of tritium per year may be produced for the startup of fusion reactors, using blocks with lithium orthosilicate instead of fissile breeder blankets.
Keywords: fusion-fission hybrid system, conventional tokamak, superconducting electromagnetic system, two-layer vacuum vessel, subcritical active cores, nuclear fuel cycle
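Two of the headline numbers above follow from simple ratios: the fission-to-fusion power multiplication, and the full-power lifetime implied by the wall loading and fluence limits. A sketch using only figures quoted in the abstract (interpreting MWa/m² divided by MW/m² as full-power years is the author of this sketch's reading, not an explicit statement in the text):

```python
def power_multiplication(fission_mw, fusion_mw):
    """Blanket energy multiplication: fission power per unit fusion power."""
    return fission_mw / fusion_mw

def full_power_years(lifetime_fluence_mwa_m2, wall_loading_mw_m2):
    """Years at full power before the lifetime neutron fluence is reached."""
    return lifetime_fluence_mwa_m2 / wall_loading_mw_m2

print(power_multiplication(400, 40))  # 10.0
print(full_power_years(2.0, 0.2))     # 10.0 full-power years
```

A tenfold power multiplication is what lets a modest 40 MW fusion neutron source drive 400 MW of subcritical fission in the blanket.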
Procedia PDF Downloads 147
125 Petrogenetic Model of Formation of Orthoclase Gabbro of the Dzirula Crystalline Massif, the Caucasus
Authors: David Shengelia, Tamara Tsutsunava, Manana Togonidze, Giorgi Chichinadze, Giorgi Beridze
Abstract:
The orthoclase gabbro intrusive is exposed in the eastern part of the Dzirula crystalline massif of the Central Transcaucasian microcontinent. It is intruded into the Baikalian quartz-diorite gneisses as a stock-like body. The intrusive is characterized by heterogeneity of rock composition: variability of mineral content and irregular distribution of rock-forming minerals. The rocks are represented by pyroxenites, gabbro-pyroxenites and gabbros of different composition: K-feldspar, pyroxene-hornblende and biotite-bearing varieties. Scientific views on the genesis and age of the orthoclase gabbro intrusive differ considerably. Based on long-term petrogeochemical and geochronological investigations of this intrusive of such extraordinary composition, the authors came to the following conclusions. According to geological and geophysical data, during the Saurian orogeny horizontal tectonic layering of the Earth's crust of the Central Transcaucasian microcontinent took place, and it is precisely this fact that explains the formation of the orthoclase gabbro intrusive. During the tectonic doubling of the Earth's crust of the mentioned microcontinent, thick tectonic nappes of mafic and sialic layers overlapped the sialic basement (the 'inversion' layer). The initial magma of the intrusive was of high-temperature basite-ultrabasite composition, the crystallization products of which are pyroxenites and gabbro-pyroxenites. Petrochemical data on the magma attest to its formation in the upper mantle and partially in the 'crustal asthenolayer'. Then the newly formed, overheated dry magma, with phenocrysts of clinopyroxene and basic plagioclase, intruded into the 'inversion' layer. From the new medium it was enriched in volatile components, causing selective melting and, as a result, the formation of leucocratic quartz-feldspar material. At the same time, in the basic magma, intensive transformation of pyroxene into hornblende was going on.
The basic magma partially mixed with the newly formed acid magma. These different magmas intruded first into the allochthonous basite layer without significantly transforming it and then into the upper sialic layer, where they crystallized at a depth of 7-10 km. By petrochemical data, the newly formed leucocratic granite magma belongs to the S-type granites, while the above-mentioned mixed magma belongs to the H (hybrid) type. During the final stage of magmatic processes, the gabbroic rocks were impregnated with high-temperature feldspar-bearing material forming anorthoclase or orthoclase. Thus, the so-called 'orthoclase gabbro' includes rocks of various genetic groups: 1. the protolith of the gabbroic intrusive; 2. the hybrid rock – K-feldspar gabbro; and 3. the leucocratic quartz-feldspar-bearing rock. Petrochemical and geochemical data obtained from the hybrid gabbro and from the intrusive protolith differ from each other. To identify the petrogenetic model of the orthoclase gabbro intrusive formation, LA-ICP-MS U-Pb zircon dating has been conducted on all three genetic types of gabbro. The zircon ages of the protolith (mean 221.4±1.9 Ma) and of the hybrid K-feldspar gabbro (mean 221.9±2.2 Ma) record the crystallization time of the intrusive, whereas the zircon age of the quartz-feldspar-bearing rocks (mean 323±2.9 Ma), as well as the inherited ages (323±9, 329±8.3, 332±10 and 335±11 Ma) of the hybrid K-feldspar gabbro, corresponds to the formation age of the Late Variscan granitoids widespread in the Dzirula crystalline massif.
Keywords: The Caucasus, isotope dating, orthoclase-bearing gabbro, petrogenetic model
Procedia PDF Downloads 343
124 The Design of a Phase I/II Trial of Neoadjuvant RT with Interdigitated Multiple Fractions of Lattice RT for Large High-grade Soft-Tissue Sarcoma
Authors: Georges F. Hatoum, Thomas H. Temple, Silvio Garcia, Xiaodong Wu
Abstract:
Soft Tissue Sarcomas (STS) represent a diverse group of malignancies with heterogeneous clinical and pathological features. The treatment of extremity STS aims to achieve optimal local tumor control, improved survival, and preservation of limb function. The National Comprehensive Cancer Network guidelines, based on the accumulated clinical data, recommend radiation therapy (RT) in conjunction with limb-sparing surgery for large, high-grade STS measuring greater than 5 cm. Such a treatment strategy can offer a cure for patients. However, when recurrence occurs (in nearly half of patients), the prognosis is poor, with a median survival of 12 to 15 months and only palliative treatment options available. Spatially fractionated radiotherapy (SFRT), long used as a non-mainstream technique for treating bulky tumors, has gained new attention in recent years due to its unconventional therapeutic effects, such as bystander/abscopal effects. Combining a single fraction of GRID, the original form of SFRT, with conventional RT was shown to marginally increase the rate of pathological necrosis, which has been recognized to correlate positively with overall survival. In an effort to consistently raise the pathological necrosis rate above 90%, multiple fractions of Lattice RT (LRT), a newer form of 3D SFRT, interdigitated with standard RT as neoadjuvant therapy, were delivered in a preliminary clinical setting. With favorable results of over 95% necrosis rate in a small cohort of patients, a Phase I/II clinical study was proposed to examine the safety and feasibility of this new strategy. Herein, the design of the clinical study is presented. 
In this single-arm, two-stage phase I/II clinical trial, the primary objectives are for >80% of patients to achieve >90% tumor necrosis and to evaluate toxicity; the secondary objectives are to evaluate local control, disease-free survival, and overall survival (OS), as well as the correlation between clinical response and relevant biomarkers. The study plans to accrue patients over a span of two years. All patients will be treated with the new neoadjuvant RT regimen, in which one of every five fractions of conventional RT is replaced by an LRT fraction with vertices receiving a dose ≥10 Gy while keeping the tumor periphery at or close to 2 Gy per fraction. Surgical removal of the tumor is planned to occur 6 to 8 weeks following the completion of radiation therapy. The study will employ a Pocock-style early stopping boundary to ensure patient safety. The patients will be followed and monitored for a period of five years. Despite much effort, the rarity of the disease has resulted in limited novel therapeutic breakthroughs. Although a higher rate of treatment-induced tumor necrosis has been associated with improved OS, with current techniques, only 20% of patients with large, high-grade tumors achieve a tumor necrosis rate exceeding 50%. If this new neoadjuvant strategy is proven effective, an appreciable improvement in clinical outcome without added toxicity can be anticipated. Given the rarity of the disease, it is hoped that such a study could be orchestrated in a multi-institutional setting.
Keywords: lattice RT, necrosis, SFRT, soft tissue sarcoma
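The Pocock-style safety monitoring mentioned in the abstract can be illustrated with a small exact-binomial sketch. This is an illustrative reconstruction, not the trial's actual stopping plan: the interim look sizes, the acceptable toxicity rate p0, and the constant nominal alpha used below are assumptions for demonstration only.

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def pocock_boundaries(look_sizes, p0, alpha_nominal):
    """Smallest toxicity count at each interim look that triggers stopping:
    the count whose binomial tail probability under the acceptable rate p0
    falls below a constant nominal alpha at every look (the defining
    feature of a Pocock-style boundary)."""
    bounds = []
    for n in look_sizes:
        t = next(t for t in range(n + 1) if binom_tail(t, n, p0) < alpha_nominal)
        bounds.append(t)
    return bounds
```

For example, `pocock_boundaries([10, 20, 30], p0=0.2, alpha_nominal=0.05)` returns the stopping counts for three hypothetical interim looks; in practice, the nominal alpha would be calibrated so the overall type I error across looks meets the design target.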
Procedia PDF Downloads 60
123 Analysis of Short Counter-Flow Heat Exchanger (SCFHE) Using Non-Circular Micro-Tubes Operated on Water-CuO Nanofluid
Authors: Avdhesh K. Sharma
Abstract:
Key to the development of energy-efficient micro-scale heat exchanger devices is selecting a large heat-transfer-surface-to-volume ratio without much expense on recirculation pumps. The increased interest in short heat exchangers (SHE) is due to the accessibility of advanced technologies for manufacturing micro-tubes in the range of 1 µm to 1 mm. Such SHE using micro-tubes are highly effective for high-flux heat transfer technologies. Nanofluids are used to enhance the thermal conductivity of the recirculated coolant and thus further enhance the heat transfer rate. The higher viscosity associated with a nanofluid, however, demands more pumping power. Thus, there is a trade-off between heat transfer rate and pressure drop with the geometry of the micro-tubes. Herein, a novel design of a short counter-flow heat exchanger (SCFHE) using non-circular micro-tubes flooded with CuO-water nanofluid is conceptualized by varying the ratio of surface area to cross-sectional area of the micro-tubes, and a framework for the comparative analysis of such an SCFHE is presented. In the SCFHE concept, micro-tubes of various geometrical shapes (viz., triangular, rectangular, and trapezoidal) are arranged row-wise to facilitate two aspects: (1) allowing easy flow distribution for the cold and hot streams, and (2) maximizing the thermal interactions with neighboring channels. Adequate distribution of rows between the cold and hot flow streams enables both aspects. For the comparative analysis, a specific volume or cross-sectional area, assumed constant, is assigned to each elemental cell (comprising the flow area and the area corresponding to half the wall thickness), while variation in surface area is allowed by selecting different micro-tube geometries in the SCFHE. 
An effective thermal conductivity model for the CuO-water nanofluid has been adopted, while the viscosity values for the water-based nanofluid are obtained empirically. Correlations for the Nusselt number (Nu) and Poiseuille number (Po) for micro-tubes have been derived or adopted, and the entrance effect is accounted for. The thermal and hydrodynamic performances of the SCFHE are defined in terms of effectiveness and pressure drop or pumping power, respectively. For defining the overall performance index of the SCFHE, two links are employed: the first relates the heat transfer between the fluid streams q to the pumping power PP (qj/PPj), while the second relates the effectiveness eff to the pressure drop dP (effj/dPj). For the analysis, the inlet temperatures of the hot and cold streams are varied in the usual range of 20 °C to 65 °C. A fully turbulent regime is seldom encountered in micro-tubes, and the flow-regime transition occurs much earlier (at ~Re = 1000); thus, Re is fixed at 900. The uncertainty in Re due to the addition of nanoparticles to the base fluid is quantified by averaging Re, and, to minimize error, the volumetric concentration is limited to the range 0-4%. Such a framework may be helpful in utilizing the maximum peripheral surface area of the SCFHE without any serious penalty on pumping power and in developing advanced short heat exchangers.
Keywords: CuO-water nanofluid, non-circular micro-tubes, performance index, short counter flow heat exchanger
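The effectiveness and performance-index definitions above can be sketched with the standard epsilon-NTU relation for a counter-flow exchanger. This is the textbook relation, not the authors' derived correlations, and the NTU and heat-capacity-rate values in the test are purely illustrative.

```python
from math import exp

def effectiveness_counterflow(ntu, cr):
    """Standard epsilon-NTU relation for a counter-flow heat exchanger,
    where cr = Cmin/Cmax is the heat-capacity-rate ratio."""
    if abs(cr - 1.0) < 1e-12:
        return ntu / (1.0 + ntu)  # balanced-flow special case
    e = exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)

def performance_index(eff, dp):
    """Second performance link from the abstract: effectiveness per
    unit pressure drop, eff_j / dP_j."""
    return eff / dp
```

The trade-off the abstract describes shows up directly: a geometry with a larger wetted perimeter raises NTU (and hence effectiveness) but also raises dP, so the index eff/dP is what discriminates between the triangular, rectangular, and trapezoidal tube shapes.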
Procedia PDF Downloads 213
122 Exploring Behavioural Biases among Indian Investors: A Qualitative Inquiry
Authors: Satish Kumar, Nisha Goyal
Abstract:
In the stock market, individual investors exhibit different kinds of behaviour. Traditional finance is built on the notion of 'homo economicus', which states that humans always make perfectly rational choices to maximize their wealth and minimize risk. That is, traditional finance is concerned with how investors should behave rather than with how investors actually behave. Behavioural finance provides the explanation for this discrepancy. Although finance has been studied for thousands of years, behavioural finance is an emerging field that combines behavioural or psychological aspects with conventional economic and financial theories to explain how emotions and cognitive factors influence investors' behaviour. These emotions and cognitive factors are known as behavioural biases, and because of them, investors make irrational investment decisions. Besides the emotional and cognitive factors, the social influence of the media, as well as of friends, relatives, and colleagues, also affects investment decisions. Psychological factors influence individual investors' decision making, but few studies have used qualitative methods to understand these factors. The aim of this study is to explore the behavioural factors or biases that affect individuals' investment decision making. For this exploratory study, an in-depth interview method was used because it provides much more exhaustive information and a relaxed atmosphere in which people feel more comfortable providing information. Twenty investment advisors with a minimum of 5 years' experience in securities firms were interviewed. Thematic content analysis was used to analyse the interview transcripts; this process involves analysis of transcripts, coding, and identification of themes from the data. Based on the analysis, we categorized the advisors' statements into various themes. 
Past market returns and volatility; a preference for safe returns; a tendency to believe they are better than others; a tendency to divide their money into different accounts/assets; a tendency to hold on to loss-making assets; a preference to invest in familiar securities; a tendency to believe that past events were predictable; a tendency to rely on a reference point; a tendency to rely on other sources of information; a tendency to regret past decisions; a tendency to be more sensitive to losses than to gains; a tendency to rely on their own skills; and a tendency to buy rising stocks with the expectation that the rise will continue are some of the major investor tendencies reported by the experts. The findings of the study revealed 13 biases present in Indian investors: overconfidence bias, the disposition effect, familiarity bias, the framing effect, anchoring bias, availability bias, self-attribution bias, representativeness, mental accounting, hindsight bias, regret aversion, loss aversion, and herding/media bias. These biases have a negative connotation because they produce a distortion in the calculation of an outcome. They are classified under three categories: cognitive errors, emotional biases, and social interaction. The findings of this study may assist both financial service providers and researchers in understanding the various psychological biases of individual investors in investment decision making. Additionally, individual investors will become aware of the behavioural biases, which will aid them in making sensible and efficient investment decisions.
Keywords: financial advisors, individual investors, investment decisions, psychological biases, qualitative thematic content analysis
Procedia PDF Downloads 169
121 Human-Carnivore Interaction: Patterns, Causes and Perceptions of Local Herders of Hoper Valley in Central Karakoram National Park, Pakistan
Authors: Saeed Abbas, Rahilla Tabassum, Haider Abbas, Babar Khan, Shahid Hussain, Muhammad Zafar Khan, Fazal Karim, Yawar Abbas, Rizwan Karim
Abstract:
Human–carnivore conflict is considered a major conservation and rural livelihood concern because many carnivore species have been heavily victimized due to elevated levels of conflict with communities. Like other snow leopard range countries, Pakistan faces this situation, and WWF is currently working there under the Asia High Mountain Project (AHMP) in Gilgit-Baltistan. Mitigating such conflicts requires a firm understanding of grazing and predation patterns, including the human-carnivore interaction. For this purpose, we conducted a survey in the Hoper valley (one of the AHMP project sites in Pakistan) in August 2013, using a questionnaire-based survey and unstructured interviews covering 647 of the 900 households permanently residing in the project area. The valley, spread over 409 km2 between 36°7'46"N and 74°49'2"E at 2,900 m asl in the Karakoram range, is considered one of the important habitats of the snow leopard and associated prey species such as the Himalayan ibex. The valley is home to 8,100 Brusho people (an ancient tribe of northern Pakistan) dependent on agro-pastoral livelihoods, including farming and livestock rearing. The total number of livestock reported was 15,481, of which 8,346 (53.91%) were sheep, 3,546 (22.91%) goats, 2,193 (14.16%) cows, 903 (5.83%) yaks, 508 (3.28%) bulls, 28 (0.18%) donkeys, 27 (0.17%) zo/zomo (a cross-breed of yak and cow), and 4 (0.03%) horses. Eighty-three percent of respondents (n=542 households) confirmed losing livestock during the preceding year, July 2012 to June 2013, accounting for 2,246 (14.51%) animals. The major causes of livestock loss were predation by large carnivores such as snow leopards and wolves (1,710; 76.14%), followed by disease (536; 23.86%). Of the total predation cases, the snow leopard is suspected of killing 1,478 animals (86.43%). 
Among the livestock, sheep were found to be the major prey of the snow leopard (810; 55%), followed by goats (484; 32.7%), cows (151; 10.21%), yaks (15; 1.015%), zo/zomo (7; 0.5%), and donkeys (1; 0.07%). The reason for the heavy depredation of sheep and goats is that they tend to browse on twigs of bushes and graze on soft grass near cliffs. They are also considered very active compared to other species, moving quickly and covering a larger grazing area, which makes them more vulnerable to snow leopard attack. The majority (1,283; 75%) of livestock killed by predators were taken during the warm season (May-September) in alpine and sub-alpine pastures, and the remainder (427; 25%) during the winter season near settlements in the valley. It was also evident from the study that snow leopard kills outside pens (1,351; 79.76%) far outnumbered those inside pens (359; 20.24%). Assessing the economic cost of livestock predation, we found that the total loss in the study area equals PKR 11,230,000 (USD 105,797), or about PKR 17,357 (USD 163.51) per household per year. The economic loss incurred by the locals due to predation is quite significant, given that the average cash income per household per year is PKR 85,000 (USD 800.75).
Keywords: carnivores, conflict, predation, livelihood, conservation, rural, snow leopard, livestock
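The headline percentages and the per-household figure follow directly from the counts reported above; a short arithmetic sketch makes the bookkeeping explicit:

```python
def share(part, whole):
    """Percentage share, rounded to two decimal places."""
    return round(100.0 * part / whole, 2)

# Counts reported in the survey
total_lost = 2246           # animals lost July 2012 - June 2013
predation_losses = 1710     # losses attributed to large carnivores
snow_leopard_kills = 1478   # predation cases attributed to the snow leopard
households = 647            # surveyed households
total_loss_pkr = 11_230_000 # total economic loss of predation

predation_share = share(predation_losses, total_lost)             # 76.14 %
snow_leopard_share = share(snow_leopard_kills, predation_losses)  # 86.43 %
loss_per_household_pkr = total_loss_pkr / households              # ~PKR 17,357
```

Note that the per-household figure divides the predation loss over all 647 surveyed households, not only the 542 that reported losses.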
Procedia PDF Downloads 347
120 Synthesis and Properties of Poly(N-(sulfophenyl)aniline) Nanoflowers and Poly(N-(sulfophenyl)aniline) Nanofibers/Titanium dioxide Nanoparticles by Solid Phase Mechanochemical and Their Application in Hybrid Solar Cell
Authors: Mazaher Yarmohamadi-Vasel, Ali Reza Modarresi-Alama, Sahar Shabzendedara
Abstract:
Purpose/Objectives: The first purpose was to synthesize Poly(N-(sulfophenyl)aniline) nanoflowers (PSANFLs) and a Poly(N-(sulfophenyl)aniline) nanofibers/titanium dioxide nanoparticles (PSANFs/TiO2NPs) nanocomposite by a solid-state mechanochemical, template-free method and to use them in hybrid solar cells. Our second aim was to increase the solubility and processability of conjugated nanomaterials in water through polar functional groups: poly[N-(4-sulfophenyl)aniline] is easily soluble in water because of the polar sulfonic acid groups in the polymer chain. Materials/Methods: Iron (III) chloride hexahydrate (FeCl3∙6H2O) was purchased from Merck Millipore. Titanium dioxide nanoparticles (TiO2, <20 nm, anatase) and sodium diphenylamine-4-sulfonate (99%) were purchased from Sigma-Aldrich. Titanium dioxide nanoparticle paste (PST-20T) was obtained from Sharifsolar Co. Conductive glasses coated with indium tin oxide (ITO) were purchased from Xinyan Technology Co. (China). For the first time, we used the solid-state mechanochemical, template-free method to synthesize the PSANFLs; likewise, the PSANFs/TiO2NPs nanocomposite was synthesized for the first time by the same technique. The energy gaps obtained from electrochemical calculations on the CV curves and from the UV-vis spectra demonstrate that the PSANFs/TiO2NPs nanocomposite is a p-n type material that can be used in photovoltaic cells. The doctor-blade method was used to create films for three kinds of hybrid solar cells with architectures of the form ITO│TiO2NPs│semiconductor sample│Al. Hybrid photovoltaic cells in bilayer and bulk heterojunction structures were then fabricated as ITO│TiO2NPs│PSANFLs│Al and ITO│TiO2NPs│PSANFs/TiO2NPs│Al, respectively. 
Fourier-transform infrared spectroscopy, field emission scanning electron microscopy (FE-SEM), ultraviolet-visible spectroscopy, cyclic voltammetry (CV), and electrical conductivity measurements were the analyses used to characterize the synthesized samples. Results and Conclusions: FE-SEM images clearly demonstrate that the morphology of the synthesized samples is nanostructured (nanoflowers and nanofibers). Electrochemical calculations of the band gap from the CV curves showed that the forbidden band gaps of the PSANFLs and the PSANFs/TiO2NPs nanocomposite are 2.95 and 2.23 eV, respectively. The I-V characteristics of the hybrid solar cells were measured under 100 mWcm−2 irradiation (AM 1.5 global conditions); the power conversion efficiencies (PCE) of the two cells were 0.30% and 0.62%, respectively, and all solar-cell results are discussed. To sum up, PSANFLs and the PSANFs/TiO2NPs nanocomposite were successfully synthesized by an affordable and straightforward solid-state mechanochemical reaction under green conditions. The solubility and processability of the synthesized compounds are improved compared to previous work. We successfully fabricated hybrid photovoltaic cells of the synthesized semiconductor nanostructured polymers and TiO2NPs in different architectures. We believe that the synthesized compounds can open inventive pathways for the development of other Poly(N-(sulfophenyl)aniline)-based hybrid materials (nanocomposites) suitable for preparing new-generation solar cells.
Keywords: mechanochemical synthesis, PSANFLs, PSANFs/TiO2NPs, solar cell
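The reported efficiencies follow the standard definition PCE = Jsc·Voc·FF/Pin under AM 1.5G illumination (Pin = 100 mW/cm²). A minimal sketch follows; the short-circuit current, open-circuit voltage, and fill-factor values in the test are hypothetical, since the abstract reports only the resulting PCEs.

```python
def power_conversion_efficiency(jsc_ma_cm2, voc_v, ff, pin_mw_cm2=100.0):
    """PCE (%) = 100 * Jsc * Voc * FF / Pin under AM 1.5G illumination
    (Pin = 100 mW/cm^2). Unit note: mA/cm^2 * V = mW/cm^2, so the ratio
    is dimensionless before conversion to percent."""
    return 100.0 * (jsc_ma_cm2 * voc_v * ff) / pin_mw_cm2
```

For instance, a hypothetical cell with Jsc = 2.0 mA/cm², Voc = 0.62 V, and FF = 0.5 would yield a PCE of 0.62%, numerically matching the bulk-heterojunction value reported above.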
Procedia PDF Downloads 67
119 Particle Size Characteristics of Aerosol Jets Produced by a Low Powered E-Cigarette
Authors: Mohammad Shajid Rahman, Tarik Kaya, Edgar Matida
Abstract:
Electronic cigarettes, also known as e-cigarettes, may have become a tool to improve smoking cessation due to their ability to provide nicotine at a selected rate. Unlike traditional cigarettes, which produce toxic elements from tobacco combustion, e-cigarettes generate aerosols by heating a liquid solution (commonly a mixture of propylene glycol, vegetable glycerin, nicotine, and flavoring agents). However, caution is still needed when using e-cigarettes due to the presence of addictive nicotine and some harmful substances produced by the heating process. The particle size distribution (PSD) and associated velocities generated by e-cigarettes have a significant influence on aerosol deposition in different regions of the human respiratory tract. Furthermore, low actuation power is beneficial in aerosol-generating devices since it reduces the emission of toxic chemicals; in the case of e-cigarettes, heating powers below 10 W can be considered low compared to the wide range of powers (0.6 to 70.0 W) studied in the literature. Because of its importance for inhalation risk reduction, a deeper understanding of the particle size characteristics of e-cigarettes demands thorough investigation; however, a comprehensive study of the PSDs and velocities of e-cigarettes under a standard testing condition at relatively low heating powers is still lacking. The present study aims to measure the particle number count and size distribution of the undiluted aerosols of a recent fourth-generation e-cigarette at low powers, within 6.5 W, using a real-time particle counter (time-of-flight method). The temporal and spatial evolution of the particle size and velocity distributions of the aerosol jets are also examined using the phase Doppler anemometry (PDA) technique. To the authors' best knowledge, the application of PDA to e-cigarette aerosol measurement is rarely reported. 
In the present study, preliminary particle number counts of the undiluted aerosols measured by the time-of-flight method showed that an increase in heating power from 3.5 W to 6.5 W resulted in an enhanced asymmetry in the PSD, deviating from a log-normal distribution. This can be considered an effect of the rapid vaporization, condensation, and coagulation processes acting on the aerosols at higher heating power. A novel mathematical expression combining exponential, Gaussian, and polynomial (EGP) distributions was proposed to describe the asymmetric PSD successfully. The count median aerodynamic diameter and geometric standard deviation lay within ranges of about 0.67 μm to 0.73 μm and 1.32 to 1.43, respectively, as the power varied from 3.5 W to 6.5 W. Laser Doppler velocimetry (LDV) and PDA measurements indicated a typical decay of the centerline streamwise mean velocity of the aerosol jet along with a reduction in particle sizes. In the final submission, a thorough literature review, a detailed description of the experimental procedure, and a discussion of the results will be provided. The particle size and turbulence characteristics of the aerosol jets will be further examined by analyzing the arithmetic mean diameter, volumetric mean diameter, volume-based mean diameter, streamwise mean velocity, and turbulence intensity. The present study has potential implications for PSD simulation and the validation of aerosol dosimetry models, leading to improvements in related aerosol-generating devices.
Keywords: e-cigarette aerosol, laser Doppler velocimetry, particle size distribution, particle velocity, phase Doppler anemometry
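For a log-normally distributed PSD, the count median diameter and geometric standard deviation can be estimated from the log-transformed diameters; a minimal sketch follows (the sample values in the test are illustrative, not measured data from this study):

```python
import math

def cmd_gsd(diameters_um):
    """Count median diameter and geometric standard deviation of a
    particle-diameter sample. For a log-normal sample, the geometric
    mean exp(mean of logs) estimates the median, and the GSD is
    exp(standard deviation of logs)."""
    logs = [math.log(d) for d in diameters_um]
    mu = sum(logs) / len(logs)
    var = sum((x - mu) ** 2 for x in logs) / len(logs)
    return math.exp(mu), math.exp(math.sqrt(var))
```

A GSD of 1.0 indicates a monodisperse aerosol; the 1.32-1.43 range reported above corresponds to a moderately polydisperse one.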
Procedia PDF Downloads 49
118 Origin of the Eocene Volcanic Rocks in Muradlu Village, Azerbaijan Province, Northwest of Iran
Authors: A. Shahriari, M. Khalatbari Jafari, M. Faridi
Abstract:
The Muradlu volcanic area is located in Azerbaijan province, NW Iran. The studied area lies within a vast region, comprising the Lesser Caucasus, southeastern Turkey, and northwestern Iran, that hosts Cenozoic volcanic and plutonic massifs. The geology of this extended region was shaped by the Alpine-Himalayan orogeny, and its Cenozoic magmatic activity evolved through the northward subduction of the Neotethyan slab and the subsequent collision of the Arabian and Eurasian plates. Based on stratigraphic and paleontological data, most of the volcanic activity in the Muradlu area occurred in the Eocene. The studied volcanic rocks disconformably overlie Late Cretaceous limestone. The volcanic sequence includes thick epiclastic and hyaloclastite breccia at the base, laterally changing to pillow lava, and continues with hyaloclastite and lava flows at the top of the series. The lava flows display textures ranging from megaporphyric-porphyric to fluidal and microlitic. The studied samples comprise picrobasalt, basalt, tephrite-basanite, trachybasalt, basaltic trachyandesite, phonotephrite, tephriphonolite, trachyandesite, and trachyte compositions. Some xenoliths of lherzolitic composition are found in the picrobasalt; these xenoliths are made of olivine, cpx (diopside), and opx (enstatite), and are probably remnants of a mantle source. The presence of feldspathoid minerals such as sodalite in the phonotephrite confirms an alkaline trend. Two types of augite phenocrysts are found in the picrobasalt, basalt, and trachybasalt. The first type is anhedral, with disequilibrium zoning and a spongy texture with reaction rims, probably resulting from a sodic magma affected by a potassic magma; the second occurs as glomerocrysts. In discrimination diagrams, the volcanic rocks show alkaline-shoshonitic trends; they contain K2O values of 0.5-7.7 wt.% and plot in the shoshonitic field. 
Most of the samples display transitional to potassic alkaline trends, and some samples reveal sodic alkaline trends; the transitional trend probably results from the mixing of the sodic alkaline and potassic magmas. The Rare Earth Element (REE) patterns and spider diagrams indicate enrichment in Large-Ion Lithophile Elements (LILE) and depletion in High Field Strength Elements (HFSE) relative to Heavy Rare Earth Elements (HREE). The enrichment in K, Rb, Sr, Ba, Zr, Th, and U and the enrichment in Light Rare Earth Elements (LREE) relative to HREE indicate the effect of subduction-related fluids on the mantle source, as reported in arc and continental-collision zones. The studied samples show low Nb/La ratios and plot in the lithosphere and lithosphere-asthenosphere fields of the Nb/La versus La/Yb diagram. These geochemical characteristics allow us to conclude that a lithospheric mantle source previously metasomatized by subduction components was the origin of the Muradlu volcanic rocks.
Keywords: alkaline, asthenosphere, lherzolite, lithosphere, Muradlu, potassic, shoshonitic, sodic, volcanism
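The Nb/La screening described above can be sketched as a simple ratio check. The numeric thresholds below are assumptions drawn from commonly quoted literature conventions (low Nb/La for lithospheric mantle, high Nb/La for asthenospheric, OIB-like sources), not values given in this abstract:

```python
def ratio(numerator_ppm, denominator_ppm):
    """Concentration ratio used in source-discrimination diagrams."""
    return numerator_ppm / denominator_ppm

def mantle_source_hint(nb_la):
    """Qualitative reading of the Nb/La ratio. Thresholds (~0.5 and ~1)
    are assumed literature conventions, not from this study:
    low Nb/La suggests a lithospheric mantle source, high Nb/La an
    asthenospheric (OIB-like) source."""
    if nb_la < 0.5:
        return "lithospheric"
    if nb_la > 1.0:
        return "asthenospheric"
    return "lithosphere-asthenosphere transition"
```

In practice, each sample's Nb, La, and Yb concentrations would be ratioed and plotted together on the Nb/La versus La/Yb diagram rather than classified one ratio at a time.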
Procedia PDF Downloads 171
117 Integrated Approach Towards Safe Wastewater Reuse in Moroccan Agriculture
Authors: Zakia Hbellaq
Abstract:
The Mediterranean region is considered a hotspot of climate change. Morocco is a semi-arid Mediterranean country facing water shortages and poor water quality, and its limited water resources constrain the activities of various economic sectors. Most of Morocco's territory lies in arid and desert areas. The potential water resources are estimated at 22 billion m3, equivalent to about 700 m3/inhabitant/year, placing Morocco in a state of structural water stress. The Kingdom of Morocco is, strictly speaking, one of the "very riskiest" countries according to the World Resources Institute (WRI), which calculates water-stress risk for 167 countries: Morocco scores 3.89 out of 5, occupying 23rd place out of 167 countries, which indicates that the demand for water exceeds the available resources. Agriculture, with a score of 3.89, is the sector most affected by water stress from irrigation and places a heavy burden on the water table. Irrigation is an unavoidable technical need and has undeniable economic and social benefits given the available resources and climatic conditions. Irrigation, and therefore the agricultural sector, currently uses 86% of the country's water resources, while industry uses 5.5%. Although its development has undeniable economic and social benefits, it also contributes to the overexploitation of most groundwater resources and to the striking decline in levels and deterioration of water quality in some aquifers. In this context, REUSE is one of the proposed solutions for reducing the water footprint of the agricultural sector and alleviating the shortage of water resources. Indeed, wastewater reuse, also known as REUSE (reuse of treated wastewater), is a step forward not only for the circular economy but also for the future, especially in the context of climate change. 
In particular, water reuse provides an alternative to existing water supplies and can be used to improve water security, sustainability, and resilience. However, given the introduction of organic trace pollutants (organic micro-pollutants), the uptake of emerging contaminants, and salinity concerns, innovative capabilities must be harnessed to overcome these problems and ensure food and health safety. To this end, an integrated approach is adopted, based on reinforcing and optimizing the treatments proposed for eliminating the organic load, with particular attention to eliminating emerging pollutants. Since membrane bioreactors (MBR) as stand-alone technologies are not able to meet the requirements of the WHO guidelines, they will be combined with heterogeneous Fenton processes using persulfate or hydrogen peroxide oxidants; similarly, adsorption and filtration are applied as tertiary treatment. In addition, crop performance will be evaluated in terms of yield, productivity, quality, and safety through the optimization of Trichoderma sp. strains used to increase crop resistance to abiotic stresses, as well as through modern omics tools such as transcriptomic analysis using RNA sequencing and methylation profiling to identify adaptive traits and the associated genetic diversity that is tolerant/resistant/resilient to biotic and abiotic stresses. Ensuring this approach will undoubtedly alleviate water scarcity and, likewise, reduce the negative and harmful impact of wastewater irrigation on the condition of crops and the health of their consumers.
Keywords: water scarcity, food security, irrigation, agricultural water footprint, reuse, emerging contaminants
Procedia PDF Downloads 160
116 Generating Individualized Wildfire Risk Assessments Utilizing Multispectral Imagery and Geospatial Artificial Intelligence
Authors: Gus Calderon, Richard McCreight, Tammy Schwartz
Abstract:
Forensic analysis of community wildfire destruction in California has shown that reducing or removing flammable vegetation in proximity to buildings and structures is one of the most important wildfire defenses available to homeowners. State laws specify the requirements for homeowners to create and maintain defensible space around all structures. Unfortunately, this decades-long effort had limited success due to noncompliance and minimal enforcement. As a result, vulnerable communities continue to experience escalating human and economic costs along the wildland-urban interface (WUI). Quantifying vegetative fuels at both the community and parcel scale requires detailed imaging from an aircraft with remote sensing technology to reduce uncertainty. FireWatch has been delivering high spatial resolution (5” ground sample distance) wildfire hazard maps annually to the community of Rancho Santa Fe, CA, since 2019. FireWatch uses a multispectral imaging system mounted onboard an aircraft to create georeferenced orthomosaics and spectral vegetation index maps. Using proprietary algorithms, the vegetation type, condition, and proximity to structures are determined for 1,851 properties in the community. Secondary data processing combines object-based classification of vegetative fuels, assisted by machine learning, to prioritize mitigation strategies within the community. The remote sensing data for the 10 sq. mi. community is divided into parcels and sent to all homeowners in the form of defensible space maps and reports. Follow-up aerial surveys are performed annually using repeat station imaging of fixed GPS locations to address changes in defensible space, vegetation fuel cover, and condition over time. These maps and reports have increased wildfire awareness and mitigation efforts from 40% to over 85% among homeowners in Rancho Santa Fe. 
To assist homeowners facing increasing insurance premiums and non-renewals, FireWatch has partnered with Black Swan Analytics, LLC, to leverage the multispectral imagery and increase homeowners’ understanding of wildfire risk drivers. For this study, a subsample of 100 parcels was selected to gain a comprehensive understanding of wildfire risk and the elements which can be mitigated. Geospatial data from FireWatch’s defensible space maps was combined with Black Swan’s patented approach using 39 other risk characteristics into a 4score Report. The 4score Report helps property owners understand risk sources and potential mitigation opportunities by assessing four categories of risk: fuel sources, ignition sources, susceptibility to loss, and hazards to fire protection efforts (FISH). This study has shown that susceptibility to loss is the category on which residents and property owners must focus their efforts. The 4score Report also provides a tool to measure the impact of homeowner actions on risk levels over time. Resiliency is the only solution to breaking the cycle of community wildfire destruction, and it starts with high-quality data and education.
Keywords: defensible space, geospatial data, multispectral imaging, Rancho Santa Fe, susceptibility to loss, wildfire risk
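The four-category FISH assessment can be sketched as a simple weighted aggregation. The weights, 0-100 scale, and field names below are illustrative assumptions, not Black Swan's patented method:

```python
# Hypothetical sketch of combining the four FISH risk categories into a
# single parcel-level score. Category names follow the abstract; the
# weighting scheme is invented for illustration.
from dataclasses import dataclass

@dataclass
class ParcelRisk:
    fuel_sources: float              # 0 (low) .. 100 (high)
    ignition_sources: float
    susceptibility: float            # susceptibility to loss
    fire_protection_hazards: float   # hazards to fire protection efforts

    def composite(self, weights=(0.25, 0.25, 0.3, 0.2)) -> float:
        """Weighted average of the four FISH categories."""
        parts = (self.fuel_sources, self.ignition_sources,
                 self.susceptibility, self.fire_protection_hazards)
        return sum(w * p for w, p in zip(weights, parts))

parcel = ParcelRisk(fuel_sources=60, ignition_sources=40,
                    susceptibility=80, fire_protection_hazards=30)
print(round(parcel.composite(), 1))  # 55.0
```

Tracking this composite over repeat annual surveys would give the "impact of homeowner actions on risk levels over time" the report describes.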
Procedia PDF Downloads 108
115 Tensile and Bond Characterization of Basalt-Fabric Reinforced Alkali Activated Matrix
Authors: S. Candamano, A. Iorfida, F. Crea, A. Macario
Abstract:
Recently, basalt fabric-reinforced cementitious matrix (FRCM) composites have attracted great attention because they have proved effective in structural strengthening and are cost- and environment-efficient. In this study, the authors investigate their mechanical behavior when an inorganic matrix, belonging to the family of alkali-activated binders, is used. In particular, the matrix has been designed to contain high amounts of industrial by-products and waste, such as Ground Granulated Blast Furnace Slag (GGBFS) and fly ash. Fresh-state properties such as workability, together with the mechanical properties and shrinkage behavior of the matrix, were measured, while microstructures and reaction products were analyzed by scanning electron microscopy and X-ray diffractometry. The reinforcement is a balanced, coated bidirectional fabric made of basalt fibres and stainless steel micro-wire, with a mesh size of 8 × 8 mm and an equivalent design thickness of 0.064 mm. Mortar mixes were prepared keeping the water/(reactive powders) and sand/(reactive powders) ratios constant at 0.53 and 2.7, respectively. An experimental campaign based on direct tensile tests on composite specimens and single-lap shear bond tests on brickwork substrate was thus carried out to investigate the mechanical behavior under tension, the stress-transfer mechanism, and the failure modes. Tensile tests were carried out on composite specimens of nominal dimensions 500 mm x 50 mm x 10 mm, with 6 embedded rovings in the loading direction. Direct shear tests (DST) were carried out on brickwork substrate using an externally bonded basalt-FRCM composite strip 10 mm thick and 50 mm wide, with a bonded length of 300 mm. The mortars exhibit, after 28 days of curing, an average compressive strength of 32 MPa and a flexural strength of 5.5 MPa. The main hydration product is a poorly crystalline aluminium-modified calcium silicate hydrate (C-A-S-H) gel.
The constitutive behavior of the composite has been identified by means of direct tensile tests, with response curves showing a tri-linear behavior: an initial uncracked stage (I), followed by crack development (II) and crack widening (III) up to failure. The ultimate tensile strength and strain were, respectively, σᵤ = 456 MPa and εᵤ = 2.20%. The tensile modulus of elasticity in stage III was EIII = 41 GPa. All single-lap shear test specimens failed due to composite debonding. It occurred at the internal fabric-to-matrix interface and was the result of a fracture of the matrix between the fibre bundles. For all specimens, transversal cracks were visible on the external surface of the composite and involved only the external matrix layer. This cracking appears when the interfacial shear stresses increase and slippage of the fabric at the internal matrix layer interface occurs. Since the external matrix layer is bonded to the reinforcement fabric, it translates with the slipped fabric. An average peak load of around 945 N, a peak stress of around 308 MPa, and a global slip of around 6 mm were measured. The preliminary test results indicate that alkali-activated materials can be considered a potentially valid alternative to traditional mortars in designing FRCM composites.
Keywords: alkali-activated binders, basalt-FRCM composites, direct shear tests, structural strengthening
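As a rough sanity check on the figures above, using the stated 50 mm strip width and 0.064 mm equivalent design thickness (the exact stress-normalization convention is an assumption here, since the abstract does not state it), the shear-bond peak stress follows from the peak load as:

```latex
\sigma_{\max} \;=\; \frac{P_{\max}}{b \, t_{eq}}
 \;=\; \frac{945\ \mathrm{N}}{50\ \mathrm{mm} \times 0.064\ \mathrm{mm}}
 \;\approx\; 295\ \mathrm{MPa}
```

which is of the same order as the reported 308 MPa; the small difference would be explained by a slightly different normalization (for example, counting only the load-direction rovings).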
Procedia PDF Downloads 129
114 A Foucauldian Analysis of Postcolonial Hybridity in a Kuwaiti Novel
Authors: Annette Louise Dupont
Abstract:
Background and Introduction: Broadly defined, hybridity is a condition of racial and cultural ‘cross-pollination’ which arises as a result of contact between colonized and colonizer. It remains a highly contested concept in postcolonial studies as it is implicitly underpinned by colonial notions of ‘racial purity.’ While some postcolonial scholars argue that individuals exercise significant agency in the construction of their hybrid subjectivities, others underscore associated experiences of exclusion, marginalization, and alienation. Kuwait and the Philippines are among the most disparate of contemporary postcolonial states. While oil resources transformed the former British Mandate of Kuwait into one of the world’s richest countries, enduring poverty in the former US colony of the Philippines drives a global diaspora which produces multiple Filipino hybridities. Although more Filipinos work in the Arabian Gulf than in any other region of the world, scholarly and literary accounts of their experiences of hybridization in this region are relatively scarce when compared to those set in North America, Australia, Asia, and Europe. Study Aims and Significance: This paper aims to address this existing lacuna by investigating hybridity and other postcolonial themes in a novel by a Kuwaiti author which vividly portrays the lives of immigrants and citizens in Kuwait and which gives a rare voice and insight into the struggles of an Arab-Filipino and European-Filipina. Specifically, this paper explores the relationships between colonial discourses of ‘black’ and ‘white’ and postcolonial discourses pertaining to ‘brown’ Filipinos and ‘brown’ Arabs, in order to assess their impacts on the protagonists’ hybrid subjectivities. 
Methodology: Foucault’s notions of discourse not only provide a conceptual basis for analyzing the colonial ideology of Orientalism, but his theories on the social exclusion of the ‘mad’ also elucidate the mechanisms by which power can operate to marginalize, alienate, and subjectify the Other. A Foucauldian lens is therefore applied to the analysis of postcolonial themes and hybrid subjectivities portrayed in the novel. Findings: The study finds that Kuwaiti and Filipino discursive practices mirror those of former white colonialists and colonized black laborers, and that these discursive practices combine with a former British colonial system of foreign labor sponsorship to create a form of governmentality in Kuwait which is based on exclusion and control. The novel’s rich social description and the reflections of the key protagonist and narrator suggest that such fiction has a significant role to play in highlighting the historical and cultural specificities of experiences of postcolonial hybridity in under-researched geographic, economic, social, and political settings. Whereas hybridity can appear abstract in scholarly accounts, the significance of literary accounts in which the lived experiences of hybrid protagonists are anchored to specific historical periods, places, and discourses is that contextual particularities are neither obscured nor dehistoricized. Conclusions: The application of Foucauldian theorizations of discourse, disciplinary power, and biopower to the analysis of this Kuwaiti literary text serves to extend an understanding of the effects of contextually-specific discourses on hybrid Filipino subjectivities, as well as a knowledge of prevailing social dynamics in a little-researched postcolonial Arabian Gulf state.
Keywords: Filipino, Foucault, hybridity, Kuwait
Procedia PDF Downloads 128
113 The Impacts of New Digital Technology Transformation on Singapore Healthcare Sector: Case Study of a Public Hospital in Singapore from a Management Accounting Perspective
Authors: Junqi Zou
Abstract:
As one of the world’s most tech-ready countries, Singapore has initiated the Smart Nation plan to harness the full power and potential of digital technologies to transform the way people live and work, through more efficient government and business processes, to make the economy more productive. The key evolutions of digital technology transformation in healthcare and the increasing deployment of the Internet of Things (IoT), Big Data, AI/cognitive computing, Robotic Process Automation (RPA), Electronic Health Record systems (EHR), Electronic Medical Record systems (EMR), and Warehouse Management Systems (WMS) in the most recent decade have significantly stepped up the move towards an information-driven healthcare ecosystem. The advances in information technology not only bring benefits to patients but also act as a key force in changing management accounting in the healthcare sector. The aim of this study is to investigate the impacts of digital technology transformation on Singapore’s healthcare sector from a management accounting perspective. Adopting a Balanced Scorecard (BSC) analysis approach, this paper conducted an exploratory case study of a newly launched Singapore public hospital, which has been recognized as among the most digitally advanced healthcare facilities in the Asia-Pacific region. Specifically, this study gains insights into how the new technology is changing healthcare organizations’ management accounting from four perspectives under the Balanced Scorecard approach: 1) Financial Perspective, 2) Customer (Patient) Perspective, 3) Internal Processes Perspective, and 4) Learning and Growth Perspective.
Based on a thorough review of archival records from the government and public sources, and on interviews with the hospital’s CIO, this study finds improvements from all four perspectives under the Balanced Scorecard framework, as follows: 1) Learning and Growth Perspective: The Government (Ministry of Health) works with the hospital to open up multiple training pathways that upgrade existing IT skills and develop new ones among the healthcare workforce to support the transformation of healthcare services. 2) Internal Process Perspective: The hospital achieved digital transformation through Project OneCare to integrate clinical, operational, and administrative information systems (e.g., EHR, EMR, WMS, EPIB, RTLS) that enable the seamless flow of data and the implementation of a just-in-time (JIT) system to help the hospital operate more effectively and efficiently. 3) Customer Perspective: The fully integrated EMR suite enhances the patient’s experience by achieving the 5 Rights (Right Patient, Right Data, Right Device, Right Entry, and Right Time). 4) Financial Perspective: Cost savings are achieved from improved inventory management and effective supply chain management. The use of process automation also results in a reduction of manpower and logistics costs. To summarize, these improvements identified under the Balanced Scorecard framework confirm the success of utilizing integrated, advanced ICT to enhance a healthcare organization’s customer service, operational efficiency, and cost savings. Moreover, the Big Data generated by this integrated EMR system can be particularly useful in aiding the management control system to optimize decision making and strategic planning.
To conclude, digital technology transformation has extended the usefulness of management accounting to both financial and non-financial dimensions, reaching new heights in the area of healthcare management.
Keywords: balanced scorecard, digital technology transformation, healthcare ecosystem, integrated information system
Procedia PDF Downloads 161
112 Raman Spectral Fingerprints of Healthy and Cancerous Human Colorectal Tissues
Authors: Maria Karnachoriti, Ellas Spyratou, Dimitrios Lykidis, Maria Lambropoulou, Yiannis S. Raptis, Ioannis Seimenis, Efstathios P. Efstathopoulos, Athanassios G. Kontos
Abstract:
Colorectal cancer is the third most common cancer diagnosed in Europe, according to the latest incidence data provided by the World Health Organization (WHO), and early diagnosis has proved to be key in reducing cancer-related mortality. In cases where surgical interventions are required for cancer treatment, the accurate discrimination between healthy and cancerous tissues is critical for the postoperative care of the patient. The current study focuses on the ex vivo handling of surgically excised colorectal specimens and the acquisition of their spectral fingerprints using Raman spectroscopy. Acquired data were analyzed in an effort to discriminate, at microscopic scale, between healthy and malignant margins. Raman spectroscopy is a spectroscopic technique with high detection sensitivity and a spatial resolution of a few micrometers. The spectral fingerprint produced during laser-tissue interaction is unique and characterizes the biostructure and its inflammatory or cancerous state. Numerous published studies have demonstrated the potential of the technique as a tool for the discrimination between healthy and malignant tissues/cells, either ex vivo or in vivo. However, the handling of excised human specimens and the Raman measurement conditions remain challenging, unavoidably affecting measurement reliability and repeatability, as well as the technique’s overall accuracy and sensitivity. Therefore, tissue handling has to be optimized and standardized to ensure preservation of cell integrity and hydration level. Various strategies have been implemented in the past, including the use of balanced salt solutions, small humidifiers, or pump-reservoir-pipette systems. In the current study, human colorectal specimens of 10 × 5 mm were collected from five patients (to date) who underwent open surgery for colorectal cancer. A novel, non-toxic zinc-based fixative (Z7) was used for tissue preservation.
Z7 demonstrates excellent protein preservation and protection against tissue autolysis. Micro-Raman spectra were recorded with a Renishaw inVia spectrometer from successive random 2 µm spots upon excitation at 785 nm, to decrease the fluorescence background and avoid tissue photodegradation. A temperature-controlled approach was adopted to stabilize the tissue at 2 °C, thus minimizing dehydration effects and the consequent focus drift during measurement. A broad spectral range, 500-3200 cm⁻¹, was covered with five consecutive full scans that lasted 20 minutes in total. The averaged spectra were used for least-squares fitting analysis of the Raman modes. Subtle Raman differences were observed between normal and cancerous colorectal tissues, mainly in the intensities of the 1556 cm⁻¹ and 1628 cm⁻¹ Raman modes, which correspond to ν(C=C) vibrations in porphyrins, as well as in the range of 2800-3000 cm⁻¹ due to CH₂ stretching of lipids and CH₃ stretching of proteins. Raman spectra evaluation was supported by histological findings from twin specimens. This study demonstrates that Raman spectroscopy may constitute a promising tool for real-time verification of clear margins in colorectal cancer open surgery.
Keywords: colorectal cancer, Raman spectroscopy, malignant margins, spectral fingerprints
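The fluorescence-background handling implied by the 785 nm excitation choice can be illustrated with a small synthetic example: a linear least-squares fit of the background on peak-free regions, before reading off a mode intensity. All numbers below are invented for illustration and are not the study's data:

```python
# Sketch: least-squares removal of a linear fluorescence baseline from a
# synthetic Raman spectrum before measuring the 1556 cm^-1 mode intensity.
import numpy as np

shift = np.linspace(1400, 1750, 351)                      # Raman shift, cm^-1
peak = 100.0 / (1.0 + ((shift - 1556.0) / 8.0) ** 2)      # Lorentzian mode
baseline = 0.05 * shift + 20.0                            # linear background
spectrum = peak + baseline

# Fit the baseline by linear least squares on (approximately) peak-free regions
mask = (shift < 1500) | (shift > 1700)
A = np.vstack([shift[mask], np.ones(mask.sum())]).T
coef, *_ = np.linalg.lstsq(A, spectrum[mask], rcond=None)
corrected = spectrum - (coef[0] * shift + coef[1])

peak_height = corrected[np.argmin(np.abs(shift - 1556.0))]
print(f"baseline-corrected 1556 cm^-1 intensity: {peak_height:.1f}")
```

The study's actual least-squares fitting of the Raman modes would additionally fit lineshapes (e.g., Lorentzians) to extract mode positions and intensities.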
Procedia PDF Downloads 91
111 Dose Measurement in Veterinary Radiology Using Thermoluminescent Dosimeter
Authors: E. Saeedian, M. Shakerian, A. Zarif Sanayei, Z. Rakeb, F. N. Alizadeh, S. Sarshough, S. Sina
Abstract:
Radiological protection for plants and animals is an area of regulatory importance. Acute doses of 0.1 Gy/d (10 rad/d) or below are highly unlikely to produce permanent, measurable negative effects on populations or communities of plants or animals. The advancement of radiodiagnostics for domestic animals, particularly dogs and cats, has gained popularity in veterinary medicine. As pets are considered to be members of the family worldwide, they are entitled to the same care and protection. It is important to have a system of radiological protection for nonhuman organisms that complements the focus on human health outlined in ICRP Publication 91. The present study assesses surface-skin entrance doses in small pets undergoing abdominal radiodiagnostic procedures using direct measurement with thermoluminescent dosimeters. These measurements allow the determination of the entrance skin dose (ESD) by calculating the amount of radiation absorbed by the skin during exposure. Thirty TLD-100 dosimeters (Harshaw), each with repeatability greater than 95% and calibrated against a ¹³⁷Cs gamma source, were used to measure doses to ten small pets, including cats and dogs, in the radiology department of a veterinary clinic in Shiraz, Iran. Radiological procedures were performed using a portable imaging unit (Philips Super M100, Philips Medical Systems, Germany) to acquire images of the abdomen; ten abdominal exams of different pets were monitored, measuring the tissue thicknesses for the two projections (lateral and ventrodorsal) and the distance of the X-ray source from the surface of each pet during the exams. Two dosimeters were used for each pet, placed on the skin over the abdomen. All procedures used the same kVp and mAs and nearly identical positioning, and were carried out over a period of two months.
The results showed a mean ESD value of 260.34 ± 50.06 µGy for pets of comparable size. Based on the results, the ESD value is associated with animal size, with larger animals receiving higher doses. Where a procedure does not require repetition, the dose can be optimized. For smaller animals, the main challenge in veterinary radiology is the dose increase caused by repetitions, which is most noticeable in the ventrodorsal position due to the difficulty of immobilizing the animal.
Keywords: direct dose measuring, dosimetry, radiation protection, veterinary medicine
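The ESD summary statistic reported above is obtained by converting each TLD reading with its calibration factor and averaging. A minimal sketch, with invented readings and an assumed calibration factor (not the study's raw data):

```python
# Illustrative sketch: converting TLD light-output readings to entrance skin
# dose (ESD) via a 137Cs calibration factor, then summarizing. All numbers
# are example values, not measurements from the study.
from statistics import mean, stdev

tld_readings_nC = [5.1, 5.6, 4.8, 5.3, 5.9, 4.6, 5.2, 5.5, 4.9, 5.4]
cal_factor_uGy_per_nC = 50.0   # assumed calibration factor

esd_uGy = [r * cal_factor_uGy_per_nC for r in tld_readings_nC]
print(f"mean ESD = {mean(esd_uGy):.1f} ± {stdev(esd_uGy):.1f} µGy")
```

In the study itself, the two dosimeters per animal would additionally be averaged per exam before pooling across pets.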
Procedia PDF Downloads 70
110 Design Challenges for Severely Skewed Steel Bridges
Authors: Muna Mitchell, Akshay Parchure, Krishna Singaraju
Abstract:
There is an increasing need for medium- to long-span steel bridges with complex geometry due to site restrictions in developed areas. One of the solutions to grade separations in congested areas is to use longer spans on skewed supports that avoid at-grade obstructions, limiting impacts to the foundation. Where vertical clearances are also a constraint, continuous steel girders can be used to reduce superstructure depths. Combining continuous long steel spans on severe skews can resolve the constraints, at a cost: the behavior of skewed girders is challenging to analyze and design, with subsequent complexity during fabrication and construction. As a part of a corridor improvement project, Walter P Moore designed two 1700-foot side-by-side bridges carrying four lanes of traffic in each direction over a railroad track. The bridges consist of prestressed concrete girder approach spans and three-span continuous steel plate girder units. The roadway design added complex geometry to the bridge, with horizontal and vertical curves combined with superelevation transitions within the plate girder units. The substructure at the steel units was skewed approximately 56 degrees to satisfy the existing railroad right-of-way requirements. A horizontal point of curvature (PC) near the end of the steel units required the use of flared girders and chorded slab edges. Due to the flared girder geometry, the cross-frame spacing in each bay is unique. Staggered cross frames were provided based on AASHTO LRFD and NCHRP guidelines for high-skew steel bridges. Skewed steel bridges develop significant forces in the cross frames and rotation in the girder webs due to differential displacements along the girders under dead and live loads. In addition, under thermal loads, skewed steel bridges expand and contract not along the alignment parallel to the girders but along the diagonal connecting the acute corners, resulting in horizontal displacement both along and perpendicular to the girders.
AASHTO LRFD recommends a 95-degree Fahrenheit temperature differential for the design of joints and bearings. The live load and the thermal loads resulted in significant horizontal forces and rotations in the bearings that necessitated the use of HLMR bearings. A unique bearing layout was selected to minimize the effect of thermal forces. The span length, width, skew, and roadway geometry at the bridges also required modular bridge joint systems (MBJS) with inverted-T bent caps to accommodate movement in the steel units. 2D and 3D finite element analysis models were developed to accurately determine the forces and rotations in the girders, cross frames, and bearings and to estimate thermal displacements at the joints. This paper covers the decision-making process for developing the framing plan, bearing configurations, joint type, and analysis models involved in the design of these high-skew, three-span continuous steel plate girder bridges.
Keywords: complex geometry, continuous steel plate girders, finite element structural analysis, high skew, HLMR bearings, modular joint
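The skewed thermal-movement behavior described above can be summarized as follows, where L_d is the diagonal between acute corners, θ is its angle to the girder line, and α is the coefficient of thermal expansion (notation assumed here for illustration):

```latex
\Delta_d = \alpha \, L_d \, \Delta T, \qquad \Delta T = 95\,^{\circ}\mathrm{F}
```

```latex
\Delta_{\parallel} = \Delta_d \cos\theta, \qquad \Delta_{\perp} = \Delta_d \sin\theta
```

The perpendicular component Δ⊥, absent in an unskewed bridge, is what drives the horizontal bearing forces and the need for the HLMR bearing layout.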
Procedia PDF Downloads 193
109 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data
Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone
Abstract:
The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms could support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore the ability to distinguish between controls and patients using mean signals extracted from ICA components corresponding to 15 well-known networks. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald criteria as revised by Polman, and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. Estimated total lesion load (ml) and number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR images. All rsFMRI data were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding of the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-time correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance and the mean Euler angle. WM and CSF signals, together with 6 motion parameters, were regressed out from the time series. We applied an independent component analysis (ICA) with the GIFT toolbox using the Infomax approach, with the number of components set to 21. Fifteen mean components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset, composed of 37 rows (subjects) and 15 features (mean signal in each network), with the R language.
The dataset was randomly split into training (75%) and test (25%) sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (RFE) for the SVM, to obtain a ranking of the most predictive variables. We then built two new classifiers using only the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and RFE-SVM was performed, the most important variable was the sensori-motor network I in both cases. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient was found to have the lowest lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the best discriminant network between controls and early MS was the sensori-motor I. Similar importance values were obtained for the sensori-motor II, cerebellum, and working memory networks. These findings, consistent with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine
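The pipeline described above (75/25 split, RF with Gini-based importances, RBF-SVM, then refitting on the top-ranked feature) can be sketched with scikit-learn, although the study used R. The data below are simulated, with one informative feature standing in for the sensori-motor network I, not the study's rsFMRI signals:

```python
# Schematic reconstruction of the analysis flow on synthetic data:
# split, train RF and RBF-SVM, rank features by RF Gini importance,
# then refit using only the most important feature.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p = 37, 15                       # 37 subjects, 15 network mean signals
X = rng.normal(size=(n, p))
y = (rng.random(n) < 0.5).astype(int)
X[y == 1, 0] += 2.0                 # make feature 0 informative (class shift)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, train_size=0.75, random_state=0, stratify=y)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)

best = int(np.argmax(rf.feature_importances_))   # Gini-based ranking
rf1 = RandomForestClassifier(n_estimators=200, random_state=0)
rf1.fit(X_tr[:, [best]], y_tr)
print("top feature:", best, "| test accuracy:", rf1.score(X_te[:, [best]], y_te))
```

The study's SVM branch additionally used recursive feature elimination (available as `sklearn.feature_selection.RFE`) rather than the RF importances shown for both here.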
Procedia PDF Downloads 240
108 i2kit: A Tool for Immutable Infrastructure Deployments
Authors: Pablo Chico De Guzman, Cesar Sanchez
Abstract:
Microservice architectures are increasingly common in distributed cloud applications due to their advantages in software composition, development speed, release cycle frequency, and time to market of the business logic. On the other hand, these architectures also introduce some challenges in the testing and release phases of applications. Container technology solves some of these issues by providing reproducible environments, ease of software distribution, and isolation of processes. However, other issues remain unsolved in current container technology when dealing with multiple machines, such as networking for multi-host communication, service discovery, load balancing, or data persistency (even though some of these challenges are already solved by traditional cloud vendors in a very mature and widespread manner). Container cluster management tools, such as Kubernetes, Mesos, or Docker Swarm, attempt to solve these problems by introducing a new control layer where the unit of deployment is the container (or the pod, a set of strongly related containers that must be deployed on the same machine). These tools are complex to configure and manage, and they do not follow a pure immutable infrastructure approach, since servers are reused between deployments. Indeed, these tools introduce dependencies at execution time for solving networking or service discovery problems. If an error occurs in the control layer, affecting running applications, specific expertise is required to perform ad-hoc troubleshooting. As a consequence, it is not surprising that container cluster support is becoming a source of revenue for consulting services. This paper presents i2kit, a deployment tool based on the immutable infrastructure pattern, where the virtual machine is the unit of deployment. The input for i2kit is a declarative definition of a set of microservices, where each microservice is defined as a pod of containers.
Microservices are built into machine images using linuxkit, a tool for creating minimal Linux distributions specialized in running containers. These machine images are then deployed to one or more virtual machines, which are exposed through a cloud vendor load balancer. Finally, the load balancer endpoint is set in other microservices using an environment variable, providing service discovery. The toolkit i2kit reuses the best ideas from container technology to solve problems like reproducible environments, process isolation, and software distribution, and at the same time relies on mature, proven cloud vendor technology for networking, load balancing, and persistency. The result is a more robust system with no learning curve for troubleshooting running applications. We have implemented an open-source prototype that transforms i2kit definitions into AWS CloudFormation templates, where each microservice AMI (Amazon Machine Image) is created on the fly using linuxkit. Even though container cluster management tools have more flexibility for resource allocation optimization, we argue that adding a new control layer entails more significant disadvantages. Resource allocation is greatly improved by using linuxkit, which has a very small footprint (around 35 MB). Also, the system is more secure, since linuxkit installs the minimum set of dependencies needed to run containers. The toolkit i2kit is currently under development at the IMDEA Software Institute.
Keywords: container, deployment, immutable infrastructure, microservice
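The i2kit flow, declarative pod definition in, cloud-vendor template out, might look roughly like the following sketch. The schema, field names, and resource names are invented for illustration and are not i2kit's actual format:

```python
# Hypothetical illustration: a declarative microservice definition (a pod of
# containers) rendered into a minimal CloudFormation-style template, with one
# immutable VM group per service fronted by a load balancer.
service = {
    "name": "orders",
    "replicas": 2,
    "containers": [
        {"image": "registry.example.com/orders:1.4.2", "port": 8080},
    ],
}

def to_cloudformation(svc: dict) -> dict:
    """Render one auto-scaling group (immutable VM images) plus one ELB."""
    return {
        "Resources": {
            f"{svc['name']}Asg": {
                "Type": "AWS::AutoScaling::AutoScalingGroup",
                "Properties": {"DesiredCapacity": str(svc["replicas"])},
            },
            f"{svc['name']}Elb": {
                "Type": "AWS::ElasticLoadBalancing::LoadBalancer",
                "Properties": {
                    "Listeners": [{"LoadBalancerPort": str(c["port"]),
                                   "InstancePort": str(c["port"]),
                                   "Protocol": "TCP"}
                                  for c in svc["containers"]],
                },
            },
        }
    }

template = to_cloudformation(service)
print(sorted(template["Resources"]))  # ['ordersAsg', 'ordersElb']
```

A redeploy would generate a fresh AMI and replace the VMs behind the load balancer, rather than mutating running servers, which is the immutable infrastructure pattern the paper advocates.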
Procedia PDF Downloads 179
107 Improving the Accuracy of Stress Intensity Factors Obtained by Scaled Boundary Finite Element Method on Hybrid Quadtree Meshes
Authors: Adrian W. Egger, Savvas P. Triantafyllou, Eleni N. Chatzi
Abstract:
The scaled boundary finite element method (SBFEM) is a semi-analytical numerical method, which introduces a scaling center in each element’s domain, thus transitioning from a Cartesian reference frame to one resembling polar coordinates. Consequently, an analytical solution is achieved in radial direction, implying that only the boundary need be discretized. The only limitation imposed on the resulting polygonal elements is that they remain star-convex. Further arbitrary p- or h-refinement may be applied locally in a mesh. The polygonal nature of SBFEM elements has been exploited in quadtree meshes to alleviate all issues conventionally associated with hanging nodes. Furthermore, since in 2D this results in only 16 possible cell configurations, these are precomputed in order to accelerate the forward analysis significantly. Any cells, which are clipped to accommodate the domain geometry, must be computed conventionally. However, since SBFEM permits polygonal elements, significantly coarser meshes at comparable accuracy levels are obtained when compared with conventional quadtree analysis, further increasing the computational efficiency of this scheme. The generalized stress intensity factors (gSIFs) are computed by exploiting the semi-analytical solution in radial direction. This is initiated by placing the scaling center of the element containing the crack at the crack tip. Taking an analytical limit of this element’s stress field as it approaches the crack tip, delivers an expression for the singular stress field. By applying the problem specific boundary conditions, the geometry correction factor is obtained, and the gSIFs are then evaluated based on their formal definition. Since the SBFEM solution is constructed as a power series, not unlike mode superposition in FEM, the two modes contributing to the singular response of the element can be easily identified in post-processing. 
Compared to the extended finite element method (XFEM), this approach is highly convenient, since neither enrichment terms nor a priori knowledge of the singularity is required. Computation of the gSIFs by SBFEM permits exceptional accuracy; however, when combined with hybrid quadtrees employing linear elements, this does not always hold. Nevertheless, it has been shown that crack propagation schemes are highly effective even on very coarse discretizations, since they rely only on the ratio of mode one to mode two gSIFs. The absolute values of the gSIFs may still be subject to large errors. Hence, we propose a post-processing scheme, which minimizes the error resulting from the approximation space of the cracked element, thus limiting the error in the gSIFs to the discretization error of the quadtree mesh. This is achieved by h- and/or p-refinement of the cracked element, which increases the number of modes present in the solution. The resulting numerical description of the element is highly accurate, with the main error source now stemming from its boundary displacement solution. Numerical examples show that this post-processing procedure can significantly improve the accuracy of the computed gSIFs with negligible computational cost, even on coarse meshes resulting from hybrid quadtrees.
Keywords: linear elastic fracture mechanics, generalized stress intensity factors, scaled boundary finite element method, hybrid quadtrees
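The abstract notes that crack propagation schemes depend only on the ratio of mode one to mode two gSIFs, not their absolute values. As an illustration of this point (not code from the paper), the standard maximum tangential stress criterion of linear elastic fracture mechanics predicts the crack kink angle purely from that ratio:

```python
import math

def kink_angle(KI, KII):
    """Crack kink angle (radians) under the maximum tangential stress
    criterion; the result depends only on the mode ratio KI/KII."""
    if KII == 0.0:
        return 0.0  # pure mode I: the crack propagates straight ahead
    # Root of KI*sin(t) + KII*(3*cos(t) - 1) = 0 in the admissible branch
    return 2.0 * math.atan((KI - math.sqrt(KI**2 + 8.0 * KII**2)) / (4.0 * KII))
```

Scaling both gSIFs by any positive constant leaves the angle unchanged, which is why coarse meshes with large absolute gSIF errors can still drive propagation accurately, as the abstract argues.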
106 Construction of an Assessment Tool for Early Childhood Development in the World of Discovery™ Curriculum
Authors: Divya Palaniappan
Abstract:
Early childhood assessment tools must measure the quality and the appropriateness of a curriculum with respect to the culture and age of the children. Many existing preschool assessment tools lack sound psychometric properties and were developed to measure only a few areas of development, such as specific skills in music, art, and adaptive behavior. Existing preschool assessment tools in India are predominantly informal and are fraught with the judgmental bias of observers. The World of Discovery™ curriculum focuses on accelerating the physical, cognitive, language, social, and emotional development of preschoolers in India through various activities. The curriculum caters to every child irrespective of their dominant intelligence, as per Gardner’s theory of multiple intelligences, which concluded that "even students as young as four years old present quite distinctive sets and configurations of intelligences". The curriculum introduces a new theme every week, where concepts are explained through various activities so that children with different dominant intelligences can understand them. For example, the ‘Insects’ theme is explained through rhymes, craft, and a counting corner, so that children whose dominant intelligence is musical, bodily-kinesthetic, or logical-mathematical can each grasp the concept. The child’s progress is evaluated using an assessment tool that measures a cluster of interdependent developmental areas: physical, cognitive, language, social, and emotional development, which for the first time renders a multi-domain approach. The assessment tool is a 5-point rating scale covering these five developmental aspects, and each activity strengthens one or more of them. During the cognitive corner, for instance, the child’s perceptual reasoning, pre-math abilities, hand-eye coordination, and fine motor skills can be observed and evaluated.
The tool differs from traditional assessment methodologies by providing a framework that allows teachers to objectively assess a child’s continuous development with respect to specific activities in real time. A pilot study of the tool was conducted with a sample of 100 children in the age group of 2.5 to 3.5 years. The data were collected over a period of 3 months across 10 centers in Chennai, India, scored by the class teacher once a week. The teachers were trained by psychologists on age-appropriate developmental milestones to minimize observer bias. The norms were calculated from the mean and standard deviation of the observed data. The results indicated high internal consistency among the parameters and showed that cognitive development improved with physical development; a significant positive relationship between physical and cognitive development was likewise observed among children in a study conducted by Sibley and Etnier. In children, the ‘Comprehension’ ability was found to be greater than the ‘Reasoning’ and pre-math abilities, consistent with the preoperational stage of Piaget’s theory of cognitive development. The average scores of the various parameters obtained through the tool corroborate established psychological theories of child development, offering strong face validity. The study provides a comprehensive mechanism to assess a child’s development and differentiate high performers from the rest. Based on the average scores, the difficulty level of activities could be increased or decreased to nurture the development of preschoolers, and appropriate teaching methodologies could be devised.
Keywords: child development, early childhood assessment, early childhood curriculum, quantitative assessment of preschool curriculum
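The internal consistency reported above is conventionally quantified with Cronbach's alpha. The following is a minimal illustrative sketch, not the authors' analysis code, and the score vectors in the usage below are hypothetical; each row holds one child's ratings on the five 1–5 developmental parameters:

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha for rows of per-child ratings, one column per
    rated parameter. alpha = k/(k-1) * (1 - sum(item vars)/var(totals))."""
    k = len(scores[0])  # number of rated parameters
    item_vars = [pvariance([row[i] for row in scores]) for i in range(k)]
    total_var = pvariance([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

Because alpha is a ratio of variances, using population variance throughout (as here) gives the same value as using sample variance throughout; values near 1 indicate that the parameters move together, consistent with the pilot study's finding.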
105 The Dynamic Nexus of Public Health and Journalism in Informed Societies
Authors: Ali Raza
Abstract:
The dynamic landscape of communication has brought about significant advancements that intersect with the realms of public health and journalism. This abstract explores the evolving synergy between these fields, highlighting how their intersection has contributed to informed societies and improved public health outcomes. In the digital age, communication plays a pivotal role in shaping public perception, policy formulation, and collective action. Public health, concerned with safeguarding and improving community well-being, relies on effective communication to disseminate information, encourage healthy behaviors, and mitigate health risks. Simultaneously, journalism, with its commitment to accurate and timely reporting, serves as the conduit through which health information reaches the masses. Advancements in communication technologies have revolutionized the ways in which public health information is both generated and shared. The advent of social media platforms, mobile applications, and online forums has democratized the dissemination of health-related news and insights. This democratization, however, brings challenges, such as the rapid spread of misinformation and the need for nuanced strategies to engage diverse audiences. Effective collaboration between public health professionals and journalists is pivotal in countering these challenges, ensuring that accurate information prevails. The synergy between public health and journalism is most evident during public health crises. The COVID-19 pandemic underscored the pivotal role of journalism in providing accurate and up-to-date information to the public. However, it also highlighted the importance of responsible reporting, as sensationalism and misinformation could exacerbate the crisis. Collaborative efforts between public health experts and journalists led to the amplification of preventive measures, the debunking of myths, and the promotion of evidence-based interventions. 
Moreover, the accessibility of information in the digital era necessitates a strategic approach to health communication. Behavioral economics and data analytics offer insights into human decision-making and allow health messages to be tailored so that they resonate more effectively with specific audiences. This approach, when integrated into journalism, enables the crafting of narratives that not only inform but also encourage positive health behaviors. Ethical considerations emerge prominently in this alliance. The responsibility to balance the public's right to know against the potential consequences of sensational reporting underscores the significance of ethical journalism. Health journalists must meticulously source information from reputable experts and institutions to maintain credibility, thus fortifying the bridge between public health and the public. As both public health and journalism undergo transformative shifts, fostering collaboration between these domains becomes essential. Training programs that familiarize journalists with public health concepts and practices can enhance their capacity to report accurately and comprehensively on health issues. Likewise, public health professionals can gain insights into effective communication strategies from seasoned journalists, ensuring that health information reaches a wider audience. In conclusion, the convergence of public health and journalism, facilitated by advancements in communication, is a cornerstone of informed societies. Effective communication strategies, driven by collaboration, ensure the accurate dissemination of health information and foster positive behavior change. As the world navigates complex health challenges, the continued evolution of this synergy holds the promise of healthier communities and a more engaged and educated public.
Keywords: public awareness, journalism ethics, health promotion, media influence, health literacy