Search results for: comparison different types
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9993

2283 Improvement of the Q-System Using the Rock Engineering System: A Case Study of Water Conveyor Tunnel of Azad Dam

Authors: Sahand Golmohammadi, Sana Hosseini Shirazi

Abstract:

Because the status and mechanical parameters of discontinuities in the rock mass are included in the calculations, various methods of rock engineering classification are often used as a starting point for the design of different types of structures. The Q-system is one of the most frequently used methods for stability analysis and determination of support systems of underground structures in rock, including tunnels. In this method, six main parameters of the rock mass are required, namely the rock quality designation (RQD), joint set number (Jn), joint roughness number (Jr), joint alteration number (Ja), joint water parameter (Jw) and stress reduction factor (SRF). In order to achieve a reasonable and optimal design, identifying the parameters that govern the stability of such structures is one of the most important goals and necessary actions in rock engineering. It is therefore necessary to study the relationships between the parameters of a system, how they interact with each other and, ultimately, how they affect the whole system. This research attempts to determine the most effective (key) parameters among the six rock mass parameters of the Q-system using the rock engineering system (RES) method, in order to improve the relationships between the parameters in the calculation of the Q value. The RES is, in essence, a method by which the degree to which each parameter of a system acts as a cause or an effect can be determined by constructing an interaction matrix. In this research, geomechanical data collected from the water conveyor tunnel of the Azad Dam were used to build the interaction matrix of the Q-system. For this purpose, instead of using conventional coding methods, which are always accompanied by shortcomings such as uncertainty, the Q-system interaction matrix is coded using a technique based on statistical analysis of the data and the correlation coefficients between parameters, so the effect of each parameter on the system is evaluated with greater certainty. The results of this study show that the resulting interaction matrix provides a reasonable estimate of the effective parameters in the Q-system. Among the six parameters of the Q-system, SRF and Jr have the maximum and minimum effect on the system (cause), respectively, while RQD and Jw are the most and least affected by the system (effect), respectively. Therefore, by developing this method, a more accurate rock mass classification relation can be obtained by weighting the required parameters in the Q-system.
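
As a brief illustration of the RES coding described above, the following minimal sketch (not the authors' implementation) builds an interaction matrix from pairwise correlation coefficients of hypothetical field data, then derives the cause (row sum), effect (column sum), and interactive-intensity weights that could be used to weight the Q-system parameters. The data array and all numbers are placeholders.

```python
import numpy as np

# Hypothetical geomechanical data: rows = mapped tunnel sections,
# columns = the six Q-system parameters (placeholder values only).
params = ["RQD", "Jn", "Jr", "Ja", "Jw", "SRF"]
rng = np.random.default_rng(0)
data = rng.normal(size=(60, len(params)))

# Code the interaction matrix with absolute correlation coefficients
# (off-diagonal only), as a statistical alternative to expert coding.
corr = np.corrcoef(data, rowvar=False)
interaction = np.abs(corr)
np.fill_diagonal(interaction, 0.0)

cause = interaction.sum(axis=1)   # C_i: influence of parameter i on the system
effect = interaction.sum(axis=0)  # E_i: influence of the system on parameter i
# With symmetric correlation-based coding, C and E coincide; the weights
# still rank the parameters by interactive intensity.

weights = (cause + effect) / (cause + effect).sum()
for p, c, e, w in zip(params, cause, effect, weights):
    print(f"{p}: C={c:.2f}  E={e:.2f}  weight={w:.2f}")
```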

Keywords: Q-system, rock engineering system, statistical analysis, rock mass, tunnel

Procedia PDF Downloads 55
2282 Coronin 1C and miR-128A as Potential Diagnostic Biomarkers for Glioblastoma Multiforme

Authors: Denis Mustafov, Emmanouil Karteris, Maria Braoudaki

Abstract:

Glioblastoma multiforme (GBM) is a heterogeneous primary brain tumour that kills most affected patients. To the authors' best knowledge, despite all research efforts, there is no early diagnostic biomarker for GBM. MicroRNAs (miRNAs) are short non-coding RNA molecules which are deregulated in many cancers. The aim of this research was to determine miRNAs with a diagnostic impact and to potentially identify promising therapeutic targets for glioblastoma multiforme. In silico analysis was performed to identify deregulated miRNAs with diagnostic relevance for glioblastoma. The expression profiles of the chosen miRNAs were then validated in vitro in the human glioblastoma cell lines A172 and U-87MG. Briefly, RNA extraction was carried out using the TRIzol method, whilst miRNA extraction was performed using the mirVana miRNA isolation kit. Quantitative real-time polymerase chain reaction was performed to verify their expression. The presence of five target proteins within the A172 cell line was evaluated by Western blotting. The expression of the CORO1C protein within 32 GBM cases was examined via immunohistochemistry. The miRNAs identified in silico included miR-21-5p, miR-34a and miR-128a. These miRNAs were shown to target deregulated GBM genes, such as CDK6, E2F3, BMI1, JAG1, and CORO1C. miR-34a and miR-128a showed low expression profiles in comparison to the control miR-RNU-44 in both GBM cell lines, suggesting tumour suppressor properties. In contrast, miR-21-5p demonstrated greater expression, indicating that it could potentially function as an oncomiR. Western blotting revealed expression of all five proteins within the A172 cell line. In silico analysis also suggested that CORO1C is a target of miR-128a and miR-34a. Immunohistochemistry demonstrated that 75% of the GBM cases showed moderate to high expression of CORO1C protein. Greater understanding of the deregulated expression of miR-128a and the upregulation of CORO1C in GBM could potentially lead to the identification of a promising diagnostic biomarker signature for glioblastomas.

Keywords: non-coding RNAs, gene expression, brain tumours, immunohistochemistry

Procedia PDF Downloads 72
2281 Development of Hydrodynamic Drag Calculation and Cavity Shape Generation for Supercavitating Torpedoes

Authors: Sertac Arslan, Sezer Kefeli

Abstract:

In this paper, the supercavitation phenomenon and supercavity shape design parameters are first explained, and then the drag force calculation methods for high-speed supercavitating torpedoes are investigated with numerical techniques and verified against empirical studies. In order to reach speeds as high as 200 or 300 knots for underwater vehicles, the hydrodynamic hull drag force, which is proportional to the density of water (ρ) and the square of the speed, must be reduced. Conventional heavyweight torpedoes can reach up to ~50 knots using classic underwater hydrodynamic techniques. However, to exceed 50 knots and reach speeds of about 200 knots, hydrodynamic viscous forces must be reduced or eliminated completely. This requirement revives the supercavitation phenomenon, which can be applied to conventional torpedoes. Supercavitation is the use of cavitation effects to create a gas bubble, allowing the torpedo to move at very high speed through the water inside a fully developed cavitation bubble. A torpedo that moves within a cavitation envelope, generated by a cavitator in the nose section and driven by a solid-fuel rocket engine in the rear section, is referred to as a supercavitating torpedo. There are two types of cavitation: natural cavitation and ventilated cavitation. In this study, a disk cavitator is modeled with natural cavitation, and the supercavitation parameters are studied. Moreover, the drag force calculation is performed for a disk-shaped cavitator with numerical techniques and compared with empirical studies. Drag forces are calculated with computational fluid dynamics methods and different empirical methods, and the numerical calculation method is developed by comparison with the empirical results. In the verification study, the cavitation number (σ), drag coefficient (CD), drag force (D), and cavity wall velocity (U
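
As a rough illustration of the kind of empirical disk-cavitator estimate such verification studies rely on, the sketch below computes the cavitation number from ambient and vapour pressures and uses the commonly quoted empirical law CD(σ) ≈ CD0(1 + σ), with CD0 ≈ 0.82 for a disk, to estimate drag at the speeds mentioned above. The cavitator diameter, operating depth, and all numbers are illustrative assumptions, not values from the paper.

```python
import math

def disk_cavitator_drag(speed_knots, cavitator_diameter_m, depth_m=5.0,
                        cd0=0.82, rho=1025.0, p_atm=101_325.0, p_vapor=2_340.0):
    """Empirical estimate of cavitation number, drag coefficient and drag (sketch)."""
    v = speed_knots * 0.514444                      # knots -> m/s
    p_inf = p_atm + rho * 9.81 * depth_m            # ambient pressure at depth, Pa
    sigma = (p_inf - p_vapor) / (0.5 * rho * v**2)  # cavitation number
    cd = cd0 * (1.0 + sigma)                        # empirical disk-cavitator law
    area = math.pi * cavitator_diameter_m**2 / 4.0  # cavitator frontal area, m^2
    drag = 0.5 * rho * v**2 * area * cd             # drag force, N
    return sigma, cd, drag

for knots in (50, 200, 300):
    sigma, cd, drag = disk_cavitator_drag(knots, cavitator_diameter_m=0.10)
    print(f"{knots:>3} kn: sigma={sigma:.4f}, CD={cd:.3f}, D={drag / 1000:.1f} kN")
```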

Keywords: cavity envelope, CFD, high speed underwater vehicles, supercavitation, supercavity flows

Procedia PDF Downloads 169
2280 Removal of Nickel Ions from Industrial Effluents by Batch and Column Experiments: A Comparison of Activated Carbon with Pinus Roxburgii Saw Dust

Authors: Sardar Khan, Zar Ali Khan

Abstract:

Rapid industrial development and urbanization contribute greatly to wastewater discharge. Wastewater from industrial activities enters natural aquatic ecosystems and is considered one of the main sources of water pollution. The discharge of effluents loaded with heavy metals into the surrounding environment has become a key issue for human health risk, the environment, and food chain contamination. Nickel causes fatigue, cancer, headache, heart problems, skin diseases (nickel itch), and respiratory disorders. Nickel compounds such as nickel sulfide and nickel oxides in the industrial environment, if inhaled, are associated with an increased risk of lung cancer. Therefore, the removal of nickel from effluents before discharge is necessary, and removal by low-cost biosorbents is an efficient method. This study aimed to investigate the efficiency of activated carbon and Pinus roxburgii saw dust for the removal of nickel from industrial effluents, using commercial activated carbon and raw P. roxburgii saw dust. Batch and column adsorption experiments were conducted for the removal of nickel. The study indicates that the removal of nickel depends greatly on pH, contact time, nickel concentration, and adsorbent dose. Maximum removal occurred at pH 9, a contact time of 600 min, and an adsorbent dose of 1 g/100 mL. The highest removal was 99.62% and 92.39% (pH based), 99.76% and 99.9% (dose based), 99.80% and 100% (agitation time based), and 92% and 72.40% (Ni concentration based) for P. roxburgii saw dust and activated carbon, respectively. Similarly, the Ni removal in column adsorption was 99.77% and 99.99% (bed height based), 99.80% and 99.99% (concentration based), and 99.98% and 99.81% (flow rate based) during column studies using P. roxburgii saw dust and activated carbon, respectively. The results were fitted to the Freundlich isotherm model, which gave r² values of 0.9424 (activated carbon) and 0.979 (P. roxburgii saw dust), and to the Langmuir isotherm model, which gave values of 0.9285 (activated carbon) and 0.9999 (P. roxburgii saw dust). The experimental results fitted both models but were in closer agreement with the Langmuir isotherm model.
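
As a brief illustration of how the Langmuir and Freundlich r² values quoted above are typically obtained, the sketch below fits both isotherms to batch equilibrium data by nonlinear least squares. The equilibrium concentrations and adsorbed amounts are invented for illustration and are not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative equilibrium data: Ce = residual Ni(II) concentration (mg/L),
# qe = amount adsorbed at equilibrium (mg/g). Not the study's measurements.
Ce = np.array([0.5, 1.2, 2.8, 5.5, 9.0, 14.0])
qe = np.array([4.8, 8.9, 13.5, 17.2, 19.1, 20.3])

def langmuir(Ce, qmax, KL):
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    return KF * Ce ** (1.0 / n)

def r_squared(y, y_pred):
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

for name, model, p0 in [("Langmuir", langmuir, (25.0, 0.5)),
                        ("Freundlich", freundlich, (5.0, 2.0))]:
    popt, _ = curve_fit(model, Ce, qe, p0=p0)
    print(f"{name}: parameters={np.round(popt, 3)}, r2={r_squared(qe, model(Ce, *popt)):.4f}")
```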

Keywords: nickel removal, batch and column adsorption, activated carbon, saw dust, plant uptake

Procedia PDF Downloads 116
2279 Comparison of Susceptibility to Measles in Preterm Infants versus Term Infants

Authors: Joseph L. Mathew, Shourjendra N. Banerjee, R. K. Ratho, Sourabh Dutta, Vanita Suri

Abstract:

Background: In India and many other developing countries, a single dose of measles vaccine is administered to infants at 9 months of age. This is based on the assumption that maternal transplacentally transferred antibodies will protect infants until that age. However, our previous data showed that most infants lose maternal anti-measles antibodies before 6 months of age, making them susceptible to measles before vaccination at 9 months. Objective: This prospective study was designed to compare susceptibility in pre-term versus term infants at different time points. Material and Methods: Following Institutional Ethics Committee approval and a formal informed consent process, venous blood was drawn from a cohort of 45 consecutive term infants and 45 consecutive pre-term infants (both groups delivered by the vaginal route) at birth, 3 months, 6 months and 9 months (prior to measles vaccination). Serum was separated and anti-measles IgG antibody levels were measured by quantitative ELISA kits (with sensitivity and specificity > 95%). Susceptibility to measles was defined as an antibody titre < 200 mIU/ml. The mean antibody levels were compared between the two groups at the four time points. Results: The mean gestation of term babies was 38.5±1.2 weeks and of pre-term babies 34.7±2.8 weeks. The respective mean birth weights were 2655±215 g and 1985±175 g. A reliable maternal vaccination record was available for only 7 of the 90 mothers. Mean anti-measles IgG antibody (±SD) in term babies was 3165±533 mIU/ml at birth, 1074±272 mIU/ml at 3 months, 314±153 mIU/ml at 6 months, and 68±21 mIU/ml at 9 months. The corresponding levels in pre-term babies were 2875±612, 948±377, 265±98, and 72±33 mIU/ml, respectively (p > 0.05 for all inter-group comparisons). The proportion of susceptible term infants at birth, 3 months, 6 months and 9 months was 0%, 16%, 67% and 96%, respectively. The corresponding proportions in the pre-term infants were 0%, 29%, 82%, and 100% (p > 0.05 for all inter-group comparisons). Conclusion: The majority of infants are susceptible to measles before 9 months of age, suggesting the need to bring measles vaccination forward, but there was no statistically significant difference between the proportions of susceptible term and pre-term infants at any of the four time points. A larger study is required to confirm these findings and to compare sero-protection if vaccination is to be administered between 6 and 9 months.

Keywords: measles, preterm, susceptibility, term infant

Procedia PDF Downloads 255
2278 Effect of Different Methods to Control the Parasitic Weed Phelipanche ramosa (L.) Pomel in Tomato Crop

Authors: Disciglio G., Lops F., Carlucci A., Gatta G., Tarantino A., Frabboni L., Tarantino E.

Abstract:

Phelipanche ramosa is considered the most damaging obligate flowering parasitic weed, attacking a wide range of cultivated plant species. The semiarid regions of the world are considered the main centre of this parasitic weed, where heavy infestations are due to its ability to produce high numbers of seeds (up to 200,000) that remain viable for extended periods (more than 19 years). In this paper, 13 parasitic weed control treatments, comprising physical, chemical, biological and agronomic methods, including the use of resistant plants, were evaluated. In 2014, a trial was performed on processing tomato (cv Docet), grown in pots filled with soil taken from a plot heavily infested by Phelipanche ramosa, at the Department of Agriculture, Food and Environment, University of Foggia (southern Italy). Tomato seedlings were transplanted on August 8, 2014 into a clay soil (USDA classification) fertilized with 100 kg ha-1 of N, 60 kg ha-1 of P2O5 and 20 kg ha-1 of S. Afterwards, top dressing was performed with 70 kg ha-1 of N. A randomized block design with 3 replicates was adopted. During the growing cycle of the tomato, at 70, 75, 81 and 88 days after transplantation, the number of parasitic shoots emerged in each pot was recorded. Leaf chlorophyll (SPAD meter) values of the tomato plants were also measured. All data were subjected to analysis of variance (ANOVA) using the JMP software (SAS Institute Inc., Cary, NC, USA), and Tukey's test was used for the comparison of means. The results show lower values of the SPAD colour index in parasitized tomato plants compared to healthy ones. In addition, none of the treatments studied provided complete control of Phelipanche ramosa. However, the virulence of the attacks was mitigated by some treatments: the Radicon product, compost activated with Fusarium, mineral nitrogen fertilizer, sulfur, enzone, and the resistant tomato genotype. It is assumed that these effects can be improved by combining some of these treatments with each other, especially to achieve a gradual and continuing reduction of the parasite's 'seed bank' in the soil.

Keywords: control methods, Phelipanche ramosa, tomato crop

Procedia PDF Downloads 605
2277 3D Microscopy, Image Processing, and Analysis of Lymphangiogenesis in Biological Models

Authors: Thomas Louis, Irina Primac, Florent Morfoisse, Tania Durre, Silvia Blacher, Agnes Noel

Abstract:

In vitro and in vivo lymphangiogenesis assays are essential for the identification of potential lymphangiogenic agents and the screening of pharmacological inhibitors. In the present study, we analyse three biological models: in vitro lymphatic endothelial cell spheroids, the in vivo ear sponge assay, and in vivo lymph node colonisation by tumour cells. These assays provide suitable 3D models to test pro- and anti-lymphangiogenic factors or drugs. 3D images were acquired by confocal laser scanning and light sheet fluorescence microscopy. Virtual scan microscopy followed by 3D reconstruction with image alignment methods was also used to obtain 3D images of whole large sponge and lymph node samples. 3D reconstruction, image segmentation, skeletonisation, and other image processing algorithms are described. Fixed and time-lapse imaging techniques are used to analyse the behaviour of lymphatic endothelial cell spheroids. The study of cell spatial distribution in spheroid models enables the detection of interactions between cells and the identification of invasion hierarchy and guidance patterns. Global measurements such as volume, length, and density of lymphatic vessels are obtained in both in vivo models. Branching density and tortuosity evaluation are also proposed to determine structure complexity. These properties, combined with vessel spatial distribution, are evaluated in order to determine the extent of lymphangiogenesis. Lymphatic endothelial cell invasion and lymphangiogenesis were evaluated under various experimental conditions. The comparison of these conditions enables the identification of lymphangiogenic agents and a better understanding of their roles in the lymphangiogenesis process. The proposed methodology is validated by its application to the three presented models.
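
As a simple illustration of the global measurements mentioned above (vessel volume, total length from a skeleton, and branching density), the sketch below derives them from a binary 3D segmentation mask. The mask, voxel size, and the use of scikit-image/scipy are assumptions for illustration and do not reproduce the authors' pipeline.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

# Hypothetical binary 3D mask of segmented lymphatic vessels (z, y, x),
# with an assumed isotropic voxel size in micrometres.
mask = np.zeros((64, 128, 128), dtype=bool)
mask[30:34, 20:110, 60:64] = True      # placeholder "vessel" for the example
voxel_um = 2.0

# Vessel volume: voxel count times voxel volume.
volume_um3 = mask.sum() * voxel_um ** 3

# Skeletonise to a one-voxel-wide centreline; its voxel count approximates
# the total vessel length (converted to micrometres).
skel = skeletonize(mask)
length_um = skel.sum() * voxel_um

# Branch points: skeleton voxels with 3 or more skeleton neighbours.
neighbour_count = ndimage.convolve(skel.astype(int), np.ones((3, 3, 3)), mode="constant") - skel
branch_points = int(np.logical_and(skel, neighbour_count >= 3).sum())
branching_density = branch_points / max(length_um, 1e-9)

print(f"volume = {volume_um3:.0f} um^3, length = {length_um:.0f} um, "
      f"branch points = {branch_points}, branching density = {branching_density:.4f} per um")
```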

Keywords: 3D image segmentation, 3D image skeletonisation, cell invasion, confocal microscopy, ear sponges, light sheet microscopy, lymph nodes, lymphangiogenesis, spheroids

Procedia PDF Downloads 360
2276 Political Antinomy and Its Resolution in Islam

Authors: Abdul Nasir Zamir

Abstract:

After the downfall of the Ottoman Caliphate, it fragmented into different small Muslim states. Muslim leaders, intellectuals, revivalists, and modernists began trying to strengthen their nations, and some Muslims are also trying to re-establish the caliphate. Every Muslim country has its own political system, e.g., kingship, dictatorship, or democracy, but these are not in the original forms discussed by historians or political scientists. The laws and their practice are mixed, combining Islamic laws with others, as in Saudi Arabia (K.S.A.) and the Islamic Republic of Pakistan. There is great conflict between revivalist Muslim parties (groups) and governments about political systems. The question is whether the real subject matter is Sharia or the political system. Leaders of modern Muslim states are alleged to be disbelievers for neglecting revelation in their laws and decisions. There are two types of laws: Islamic laws and management laws. The conflict is that non-Islamic laws are in practice in Muslim states. Non-Islamic laws can be gradually replaced with Islamic laws through a legal and peaceful process, in accordance with the practice of former Muslim leaders and scholars. The bloodshed of Muslims is not allowed in any case, and a weak Muslim state is a greater blessing than none. The political system after Muhammad and the rightly guided caliphs is considered kingship, yet during this period Muslims not only developed in science and technology but also conquered many territories. If the original aim is kept in practice, then modern Muslim states can be stabilized under different political systems. Modern Muslim states are the hope for the survival, stability, and development of the Muslim Ummah. Islam does not allow armed clashes with a Muslim army or Muslim civilians. The caliphate is based on belief in one Allah Almighty and on good deeds according to the Quran and Sunnah. As faith weakened and good deeds declined from their standard level, the caliphate automatically weakened and eventually ended. The last, weak caliphate was the Ottoman Caliphate, which was a hope for all the Muslims of the world. There is no caliphate or caliph present in the world today, but every Muslim country or state is like an Amarat (a part of the caliphate, or a small and alternative form of it) of Muslims. It is the duty of all Muslims to stabilize these modern Muslim states with tolerance.

Keywords: caliphate, conflict resolution, modern Muslim state, political conflicts, political systems, tolerance

Procedia PDF Downloads 141
2275 Saccharification and Bioethanol Production from Banana Pseudostem

Authors: Elias L. Souza, Noeli Sellin, Cintia Marangoni, Ozair Souza

Abstract:

Among the different forms of reuse and recovery of agro-residual waste is the production of biofuels. The production of second-generation ethanol has been evaluated and proposed as one of the technically viable alternatives for this purpose. This research work employed the banana pseudostem as biomass. Two different chemical pre-treatment methods (acid hydrolysis with H2SO4 2% w/w and alkaline hydrolysis with NaOH 3% w/w) of dry and milled biomass (70 g/L of dry matter, ms) were assessed, and the corresponding reducing sugar (AR) yields (YAR) after enzymatic saccharification were determined. The effect on YAR of increasing the dry matter (ms) from 70 to 100 g/L, in dry and milled as well as fresh biomass, was also analyzed. Changes in cellulose crystallinity and in biomass surface morphology due to the different chemical pre-treatments were analyzed by X-ray diffraction and scanning electron microscopy. The acid pre-treatment resulted in higher YAR values, whether related to the cellulose content under saccharification (RAR = 79.48) or to the biomass concentration employed (YAR/ms = 32.8%). In a comparison between alkaline and acid pre-treatments, the latter led to an increase in the cellulose content of the reaction mixture from 52.8 to 59.8%, a reduction of the cellulose crystallinity index from 51.19 to 33.34%, and increases in RAR (43.1%) and YAR/ms (39.5%). The increase of dry matter (ms) from 70 to 100 g/L in the acid pre-treatment resulted in a decrease of the average yields RAR (43.1%) and YAR/ms (18.2%). Using the fresh pseudostem with the broth removed, at either 70 g/L or 100 g/L of dry matter (ms), similarly to the alkaline pre-treatment, led to lower average values of RAR (67.2% and 42.2%) and YAR/ms (28.4% and 17.8%), respectively. The acid pre-treated and saccharified biomass broth was detoxified with different activated carbon contents (1, 2 and 4% w/v), concentrated to AR = 100 g/L, and fermented by Saccharomyces cerevisiae. The ethanol yield (YP/AR) and productivity (QP) values were determined and compared to those obtained from the fermentation of non-concentrated/non-detoxified broth (AR = 18 g/L) and concentrated/non-detoxified broth (AR = 100 g/L). The highest average YP/AR value (0.46 g/g) was obtained from the fermentation of the non-concentrated broth. This value did not present a significant difference (p < 0.05) when compared to the YP/AR of the broth concentrated and detoxified with 1% w/v activated carbon (YP/AR = 0.41 g/g). However, a higher ethanol productivity (QP = 1.44 g/L.h) was achieved through broth detoxification. This value was 75% higher than the average QP determined using the concentrated and non-detoxified broth (QP = 0.82 g/L.h), and 22% higher than the QP found with the non-concentrated broth (QP = 1.18 g/L.h).
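
For readers unfamiliar with the quoted fermentation indices, the following sketch shows the arithmetic behind them: the ethanol yield YP/AR is grams of ethanol produced per gram of reducing sugar consumed, and the volumetric productivity QP is the final ethanol concentration divided by the fermentation time. The numbers below are illustrative only, not the study's raw data.

```python
def ethanol_yield_and_productivity(ethanol_g_per_L, sugar_initial_g_per_L,
                                   sugar_final_g_per_L, time_h):
    """Return (YP/AR in g/g, QP in g/L.h) for a batch fermentation."""
    yp_ar = ethanol_g_per_L / (sugar_initial_g_per_L - sugar_final_g_per_L)
    qp = ethanol_g_per_L / time_h
    return yp_ar, qp

# Illustrative example: 100 g/L initial AR, 5 g/L residual AR,
# 39 g/L ethanol after 27 h of fermentation.
yp_ar, qp = ethanol_yield_and_productivity(39.0, 100.0, 5.0, 27.0)
print(f"YP/AR = {yp_ar:.2f} g/g, QP = {qp:.2f} g/L.h")
```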

Keywords: biofuels, biomass, saccharification, bioethanol

Procedia PDF Downloads 334
2274 Degradation of Commercial Polychlorinated Biphenyl Mixture by Naturally Occurring Facultative Microorganisms via Anaerobic Dechlorination and Aerobic Oxidation

Authors: P. M. G. Pathiraja, P. Egodawatta, A. Goonetilleke, V. S. J. Te'o

Abstract:

The production and use of polychlorinated biphenyls (PCBs), a group of synthetic halogenated hydrocarbons, have been restricted worldwide due to their toxicity, and PCBs are categorized among the twelve priority persistent organic pollutants (POPs) of the Stockholm Convention. The low reactivity and high chemical stability of PCBs have made them highly persistent in the environment, and their bio-concentration and bio-magnification along the food chain contribute to multiple health impacts in humans and animals. Remediating environments contaminated with PCBs has been a challenging task for decades. The use of microorganisms for the remediation of PCB-contaminated soils and sediments has been widely investigated due to their potential to break down these complex contaminants with minimal environmental impact. To achieve effective bioremediation of PCB-contaminated environments, microbes were sourced from environmental samples and tested for their ability to hydrolyze PCBs under different conditions. The PCB degradation efficiencies of four naturally occurring facultative bacterial cultures, isolated through selective enrichment, were investigated simultaneously under aerobic and anaerobic conditions in minimal salt medium using 50 mg/L of Aroclor 1260, a commonly used commercial PCB mixture, as the sole source of carbon. The results of a six-week study demonstrated that all the tested facultative Achromobacter, Ochrobactrum, Lysinibacillus and Pseudomonas strains are capable of degrading PCBs under both anaerobic and aerobic conditions while helping to solubilize the hydrophobic PCBs in the aqueous minimal medium. Overall, the results suggest that some facultative bacteria are capable of effectively degrading PCBs under anaerobic conditions through reductive dechlorination and under aerobic conditions through oxidation. Therefore, the use of suitable facultative microorganisms under combined anaerobic-aerobic conditions, and combinations of strains capable of both solubilizing and breaking down PCBs, have high potential for achieving higher PCB removal rates.

Keywords: bioremediation, combined anaerobic-aerobic degradation, facultative microorganisms, polychlorinated biphenyls

Procedia PDF Downloads 227
2273 The Effect of General Data Protection Regulation on South Asian Data Protection Laws

Authors: Sumedha Ganjoo, Santosh Goswami

Abstract:

The rising reliance on technology places national security at the forefront of 21st-century issues. It complicates the efforts of emerging and developed countries to combat cyber threats and increases the inherent risk factors connected with technology. The inability to preserve data securely might have devastating repercussions on a massive scale. Consequently, it is vital to establish national, regional, and global data protection rules and regulations that penalise individuals who participate in immoral technology usage and exploit the inherent vulnerabilities of technology. This paper seeks to analyse GDPR-inspired Bills in the South Asian region and determine their suitability for the development of a worldwide data protection framework, considering that Asian countries are much more diverse than European ones. In light of this context, the objectives of this paper are to identify GDPR-inspired Bills in the South Asian region, to identify their similarities and differences as well as the obstacles to developing a regional-level data protection mechanism, and thereby to address the need to develop a global-level mechanism. Due to the qualitative character of this study, the researcher conducted a comprehensive literature review of prior research papers, journal articles, survey reports, and government publications on the aforementioned topics. Taking into consideration the survey results, the researcher conducted a critical analysis of the significant parameters highlighted in the literature study. According to the primary results of this study, many nations in the South Asian region are in the process of revising their present data protection measures in accordance with the GDPR. Consideration is given to the data protection laws of Thailand, Malaysia, China, and Japan. Significant parallels and differences in comparison to the GDPR are discussed in detail. The conclusion of the research analyses the development of various data protection legislation regimes in South Asia.

Keywords: data privacy, GDPR, Asia, data protection laws

Procedia PDF Downloads 70
2272 Myocardial Reperfusion Injury during Percutaneous Coronary Intervention in Patient with Triple-Vessel Disease in Limited Resources Hospital: A Case Report

Authors: Fanniyah Anis, Bram Kilapong

Abstract:

Myocardial reperfusion injury is defined as the cellular damage that results from a period of ischemia followed by the re-establishment of the blood supply to the infarcted tissue. Ventricular tachycardia is one of the most commonly encountered reperfusion arrhythmias, which are one type of myocardial reperfusion injury. Prompt and early treatment can reduce mortality in high-risk patients with a history of triple-vessel disease, despite the limited resources of the hospital. Case report: A 53-year-old male was diagnosed with NSTEMI with triple-vessel disease (3VD) and comorbid hypertension and underwent revascularization with percutaneous coronary intervention. Ventricular tachycardia leading to cardiac arrest occurred right after the stent was inserted. Resuscitation was performed for almost 2 hours until spontaneous circulation returned. The patient was admitted to the ICU with refractory cardiogenic shock despite a combination of inotropic and vasopressor agents, under standard non-invasive monitoring owing to the limitations of the hospital. Angiography was performed again 5 hours later to exclude other possible coronary artery blockages and to conclude the diagnosis of myocardial reperfusion injury. The patient was continually managed with a combination of antiplatelet agents and a maintenance dose of anti-arrhythmic agents. Management focused on supportive care and on preventing further deterioration of the condition. The patient showed clinical improvement and regained consciousness within 24 hours, was successfully discharged from the ICU within 3 days without any neurological sequelae, and was discharged from hospital after 3 days of observation in the general ward. The limited resources of the hospital did not prevent the physicians from attaining a good outcome in this case of myocardial reperfusion injury, and angiography alone can be used to confirm the diagnosis of myocardial reperfusion injury.

Keywords: limited resources hospital, myocardial reperfusion injury, prolonged resuscitation, refractory cardiogenic shock, reperfusion arrhythmia, revascularization, triple-vessel disease

Procedia PDF Downloads 292
2271 Murine Pulmonary Responses after Sub-Chronic Exposure to Environmental Ultrafine Particles

Authors: Yara Saleh, Sebastien Antherieu, Romain Dusautoir, Jules Sotty, Laurent Alleman, Ludivine Canivet, Esperanza Perdrix, Pierre Dubot, Anne Platel, Fabrice Nesslany, Guillaume Garcon, Jean-Marc Lo-Guidice

Abstract:

Air pollution is one of the leading causes of premature death worldwide. Among air pollutants, particulate matter (PM) is a major health risk factor through the induction of cardiopulmonary diseases and lung cancers. PM is composed of coarse, fine and ultrafine particles (PM10, PM2.5, and PM0.1, respectively). Ultrafine particles are emerging, unregulated pollutants that might have greater toxicity than larger particles, since they are more numerous and have a higher surface area per unit of mass. Our project aims to develop a relevant in vivo model of sub-chronic exposure to atmospheric particles in order to elucidate the specific respiratory impact of ultrafine particles compared to fine particulate matter. Quasi-ultrafine (PM0.18) and fine (PM2.5) particles were collected in the urban industrial zone of Dunkirk in northern France during a 7-month campaign and submitted to physico-chemical characterization. BALB/c mice were then exposed intranasally to 10 µg of PM0.18 or PM2.5 three times a week. After 1 or 3 months of exposure, bronchoalveolar lavages (BAL) were performed and lung tissues were harvested for histological and transcriptomic analyses. The physico-chemical study of the collected particles shows that there is no major difference in elemental and surface chemical composition between PM0.18 and PM2.5. Furthermore, the results of the cytological analyses show that both types of particulate fractions can be internalized in lung cells. However, the cell count in BAL and preliminary transcriptomic data suggest that PM0.18 could be more reactive and induce stronger lung inflammation in exposed mice than PM2.5. Complementary studies are in progress to confirm these first data and to identify the metabolic pathways more specifically associated with the toxicity of ultrafine particles.

Keywords: environmental pollution, lung effects, mice, ultrafine particles

Procedia PDF Downloads 224
2270 Ancient Iran Water Technologies

Authors: Akbar Khodavirdizadeh, Ali Nemati Babaylou, Hassan Moomivand

Abstract:

The history of human access to water techniques has been one of the factors in the formation of human civilizations in the ancient world. Techniques that make surface water and groundwater accessible at the ground surface have been among the most ingenious achievements in the human effort to reach water. In this study, while examining the water techniques of ancient Iran, based on the qanat, the water supply systems of different regions of the ancient world were also studied and compared. Six groups were studied in terms of their water supply systems: ancient Greece (Archaic 480-750 BC and Classical 223-480 BC), Urartu at Tuspa (600-850 BC), Petra (106-168 BC), ancient Rome (265 BC), the ancient United States (1450 BC), and ancient Iran. Past water technologies in these areas included water transmission systems in primary urban centers, water structures for water control, bridges used in water transfer, waterways constructed for water transfer, rainfall storage, pipes of various materials (ceramic, lead, wood and stone) used in water transfer, flood control works, water reservoirs, dams, channels, wells, and qanats. The central plateau of Iran is one of the arid and desert regions. Archaeological, geomorphological, and paleontological studies of the central region of the Iranian plateau showed that, without the use of qanats, urban civilization in this region would have been difficult and even impossible. The Zarch qanat is the most important qanat in the Yazd region; it is a plain qanat with a gallery length of 80 km, a mother well 85 m deep, and 2115 well shafts. The main purpose of building the Qanat of Zarch was to access the groundwater source and transfer it to the surface of the ground. In the structure of the qanat and its technique of transferring water from the groundwater source to the surface, it differs greatly from other water techniques of the ancient world. The results show that studying the water technologies of the ancient world is very important for understanding the history of humanity's use of hydraulic techniques.

Keywords: ancient water technologies, groundwaters, qanat, human history, Ancient Iran

Procedia PDF Downloads 96
2269 Approaches to Reduce the Complexity of Mathematical Models for the Operational Optimization of Large-Scale Virtual Power Plants in Public Energy Supply

Authors: Thomas Weber, Nina Strobel, Thomas Kohne, Eberhard Abele

Abstract:

In the context of the energy transition in Germany, the importance of so-called virtual power plants in the energy supply continues to increase. The progressive dismantling of large power plants and the ongoing construction of many new decentralized plants result in great potential for optimization through synergies between the individual plants. These potentials can be exploited by mathematical optimization algorithms to calculate the optimal operational planning of decentralized power and heat generators and storage systems; this includes linear and mixed-integer linear optimization. In this paper, procedures for reducing the number of decision variables to be calculated are explained and validated. On the one hand, this includes combining n similar installation types into one aggregated unit. This aggregated unit is described by the same constraints and objective function terms as a single plant. This reduces the number of decision variables per time step, and thus the complexity of the problem to be solved, by a factor of n. The exact operating mode of the individual plants can then be calculated in a second optimization in such a way that the output of the individual plants corresponds to the calculated output of the aggregated unit. Another way to reduce the number of decision variables in an optimization problem is to reduce the number of time steps to be calculated. This is useful if a high temporal resolution is not necessary for all time steps. For example, the volatility or the forecast quality of environmental parameters may justify a high or low temporal resolution of the optimization. Both approaches are examined with regard to the resulting calculation time as well as optimality. Several optimization models for virtual power plants (combined heat and power plants, heat storage, power storage, gas turbine) with different numbers of plants are used as a reference for the investigation of both procedures with regard to calculation duration and optimality.
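
The aggregation idea described above can be illustrated with a deliberately small sketch: n identical units are replaced by a single aggregated unit whose capacity bounds are scaled by n, so each time step needs one decision variable instead of n; a second, here trivial, step disaggregates the result. The PuLP library, the demand profile, the unit sizes and the cost figure are all illustrative assumptions, not the models used in the paper.

```python
import pulp

# Hypothetical data: electrical demand per time step and n identical CHP units.
demand = [120.0, 300.0, 450.0, 200.0]   # kW in each time step
n_units = 10                            # number of identical units
p_max_unit = 50.0                       # kW upper bound of a single unit
cost_per_kwh = 0.08                     # EUR/kWh, illustrative

prob = pulp.LpProblem("aggregated_vpp_dispatch", pulp.LpMinimize)

# One aggregated variable per time step instead of n_units variables:
# the aggregated bounds are the single-unit bounds scaled by n_units.
agg_power = [pulp.LpVariable(f"P_agg_{t}", lowBound=0, upBound=n_units * p_max_unit)
             for t in range(len(demand))]

# Cover the demand in every time step (grid import, minimum loads and
# unit-commitment constraints are omitted in this sketch).
for t, d in enumerate(demand):
    prob += agg_power[t] >= d, f"demand_{t}"

prob += pulp.lpSum(cost_per_kwh * p for p in agg_power)
prob.solve(pulp.PULP_CBC_CMD(msg=False))

# Trivial disaggregation (the paper's second optimization would refine this):
for t, p in enumerate(agg_power):
    print(f"t={t}: aggregated={p.value():.1f} kW -> {n_units} x {p.value() / n_units:.1f} kW")
```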

Keywords: CHP, Energy 4.0, energy storage, MILP, optimization, virtual power plant

Procedia PDF Downloads 157
2268 Geospatial Analysis of Hydrological Response to Forest Fires in Small Mediterranean Catchments

Authors: Bojana Horvat, Barbara Karleusa, Goran Volf, Nevenka Ozanic, Ivica Kisic

Abstract:

Forest fires are a major threat in many regions of Croatia, especially in coastal areas. Although they are sometimes caused by natural processes, the most common cause is the human factor, intentional or unintentional. Forest fires drastically transform landscapes and influence natural processes. The main goal of the presented research is to analyse and quantify the impact of forest fire on hydrological processes and to propose the model that best describes changes in hydrological patterns in the analysed catchments. Keeping in mind the spatial component of the processes, geospatial analysis is performed to gain better insight into the spatial variability of the hydrological response to disastrous events. In that respect, two catchments that experienced severe forest fire were delineated, and various hydrological and meteorological data, both attribute and spatial, were collected. The major drawback is certainly the lack of hydrological data, common in small torrential karstic streams; hence, modelling results should be validated with data collected in a catchment that has similar characteristics and established hydrological monitoring. The event chosen for the modelling is the forest fire that occurred in July 2019 and burned nearly 10% of the analysed area. Surface (land use/land cover) conditions before and after the event were derived from two Sentinel-2 images. The mapping of the burnt area is based on a comparison of the Normalized Burn Ratio (NBR) computed from both images. To estimate and compare hydrological behaviour before and after the event, curve number (CN) values are assigned to the land use/land cover classes derived from the satellite images. Hydrological modelling resulted in surface runoff generation and hence in the prediction of the catchments' hydrological responses to a forest fire event. The research was supported by the Croatian Science Foundation through the project 'Influence of Open Fires on Water and Soil Quality' (IP-2018-01-1645).
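
As a compact illustration of the two computations named above, the sketch below evaluates the Normalized Burn Ratio from Sentinel-2 NIR and SWIR reflectances (and the pre/post-fire difference dNBR used to map the burnt area) and the SCS curve-number runoff for a given storm depth. The reflectances, CN values, and rainfall depth are hypothetical, not the study's data.

```python
def nbr(nir, swir):
    """Normalized Burn Ratio from NIR (e.g. Sentinel-2 B8) and SWIR (e.g. B12) reflectance."""
    return (nir - swir) / (nir + swir)

def scs_runoff_mm(rainfall_mm, cn):
    """SCS curve-number direct runoff (mm) for a storm rainfall depth (mm)."""
    s = 25400.0 / cn - 254.0          # potential maximum retention, mm
    ia = 0.2 * s                      # initial abstraction
    if rainfall_mm <= ia:
        return 0.0
    return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

# Hypothetical pixel reflectances before and after the fire.
nbr_pre = nbr(nir=0.45, swir=0.15)
nbr_post = nbr(nir=0.20, swir=0.30)
dnbr = nbr_pre - nbr_post             # larger dNBR -> more severe burn
print(f"NBR pre = {nbr_pre:.2f}, post = {nbr_post:.2f}, dNBR = {dnbr:.2f}")

# Hypothetical CN values for the pre-fire forest and the post-fire burnt area.
for label, cn in [("pre-fire forest", 60), ("post-fire burnt area", 85)]:
    q = scs_runoff_mm(rainfall_mm=50.0, cn=cn)
    print(f"{label}: CN = {cn}, runoff for a 50 mm storm = {q:.1f} mm")
```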

Keywords: Croatia, forest fire, geospatial analysis, hydrological response

Procedia PDF Downloads 120
2267 The Organizational Structure, Development Features, and Metadiscoursal Elements in the Expository Writing of College Freshman Students

Authors: Lota Largavista

Abstract:

This study, entitled 'The Organizational Structure, Development Features, and Metadiscoursal Elements in the Expository Writing of Freshman College Writers', aimed to examine essays written by college students. It seeks to examine the organizational structure and development features of the essays and to describe their defining characteristics, the linguistic elements at both macrostructural and microstructural discourse levels, and the types of textual and interpersonal metadiscourse markers that are employed in order to negotiate meanings with prospective readers. The frameworks used to analyze the essays include Toulmin's (1984) model of argument structure; Olson's (2003) three-part essay structure; Halliday and Matthiessen's (2004, in Herriman, 2011) notions of thematic structure; Danes' (1974) thematic progression or method of development; Halliday's (2004) concept of grammatical and lexical cohesion; Hyland's (2005) metadiscourse strategies; and Chung and Nation's (2003) four-step scale for technical vocabulary. This descriptive study analyzes qualitatively and quantitatively how freshman students generally organize and express their written compositions. Coding of units is done to determine what linguistic features are present in the essays. Findings revealed that the students' expository essays follow a three-part structure containing all three moves: the Introduction, the Body, and the Conclusion. Stance assertion, stance support, and emerging moves/strategies are found to be employed in the essays. Students use more marked themes in the essays and also prefer constant theme progression as their method of development. The analysis of salient linguistic elements reveals frequently used cohesive devices and metadiscoursal strategies. Based on the findings, an instructional learning plan is proposed. This plan is characterized by a genre approach that focuses on expository and linguistic conventions.

Keywords: metadiscourse, organization, theme progression, structure

Procedia PDF Downloads 222
2266 Study of Error Analysis and Sources of Uncertainty in the Measurement of Residual Stresses by the X-Ray Diffraction

Authors: E. T. Carvalho Filho, J. T. N. Medeiros, L. G. Martinez

Abstract:

Residual stresses are self-equilibrating stresses that act on the microstructure of a rigid body without the application of an external load. They are elastic stresses and can be induced by mechanical, thermal and chemical processes, causing a deformation gradient in the crystal lattice and favoring premature failure in mechanical components. The search for measurements with good reliability has been of great importance for the manufacturing industries. Several methods are able to quantify these stresses according to physical principles and the mechanical response of the material. The X-ray diffraction technique is one of the most sensitive to small variations of the crystal lattice, since the X-ray beam interacts with the interplanar distance. Being very sensitive, the technique is also susceptible to measurement variations, requiring a study of the factors that influence the final result of the measurement. Instrumental and operational factors, form deviations of the samples, and the analysis geometry are some of the variables that need to be considered and analyzed in order to obtain the true measurement. The aim of this work is to analyze the sources of error inherent in the residual stress measurement process by the X-ray diffraction technique, making an interlaboratory comparison to verify the reproducibility of the measurements. In this work, two specimens were machined, differing from each other by the surface finishing: grinding and polishing. Additionally, iron powder with a particle size of less than 45 µm was selected to serve as a reference (as recommended by the ASTM E915 standard) for the tests. To verify the deviations caused by the equipment, the specimens were positioned and, under the same analysis conditions, seven measurements were carried out at 11 ψ tilts. To verify sample positioning errors, seven measurements were performed, repositioning the sample for each measurement. To check geometry errors, the measurements were repeated for the Bragg-Brentano and parallel-beam geometries. In order to verify the reproducibility of the method, the measurements were performed in two different laboratories with different equipment. The results were statistically analyzed and the errors quantified.
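
A minimal sketch of how repeatability (the scatter of the seven repeated measurements) and reproducibility (the agreement between the two laboratories) can be quantified for such a comparison is shown below. The stress values, and the simple pooled-standard-deviation summary, are illustrative assumptions rather than the study's procedure or data.

```python
import statistics as st

# Hypothetical residual stress results (MPa): seven repeats per laboratory.
lab_a = [-312.0, -305.5, -318.2, -309.7, -314.1, -307.9, -311.3]
lab_b = [-298.4, -304.2, -301.1, -295.8, -306.5, -300.3, -302.7]

def repeatability_sd(values):
    """Standard deviation of repeated measurements under identical conditions."""
    return st.stdev(values)

mean_a, mean_b = st.mean(lab_a), st.mean(lab_b)
s_a, s_b = repeatability_sd(lab_a), repeatability_sd(lab_b)

# Reproducibility indicator: the inter-laboratory bias compared with the
# pooled within-laboratory scatter.
pooled_sd = ((s_a ** 2 + s_b ** 2) / 2) ** 0.5
bias = mean_a - mean_b

print(f"Lab A: {mean_a:.1f} +/- {s_a:.1f} MPa | Lab B: {mean_b:.1f} +/- {s_b:.1f} MPa")
print(f"Inter-laboratory bias: {bias:.1f} MPa (pooled within-lab SD {pooled_sd:.1f} MPa)")
```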

Keywords: residual stress, x-ray diffraction, repeatability, reproducibility, error analysis

Procedia PDF Downloads 167
2265 Numerical Simulation of Precast Concrete Panels for Airfield Pavement

Authors: Josef Novák, Alena Kohoutková, Vladimír Křístek, Jan Vodička

Abstract:

Numerical analysis software packages are among the main tools for simulating the real behavior of various concrete structures and elements. In comparison with experimental tests, they offer an affordable way to study the mechanical behavior of structures under various conditions. This contribution deals with a precast element of an innovative airfield pavement system which is being developed within an ongoing scientific project. The proposed system consists of a two-layer surface course of precast concrete panels positioned on a two-layer base of fiber-reinforced concrete with recycled aggregate. As the panels are supposed to be installed directly on the hardened base course, imperfections at the interface between the base course and the surface course are expected. Considering such circumstances, three different behavior patterns could be established and considered when designing the precast element. The enormous cost of full-scale experiments makes it necessary to simulate the behavior of the element in numerical analysis software using the finite element method. The simulation was conducted on a nonlinear model in order to obtain results that could fully substitute for results from experiments. First, several loading schemes were considered with the aim of identifying the critical one, which was then used for the simulation. The main objective of the simulation was to optimize the reinforcement of the element subject to quasi-static loading from airplanes. When running the simulation, several parameters were considered, namely geometrical imperfections, manufacturing imperfections, the stress state in the reinforcement, the stress state in the concrete, and crack width. The numerical simulation revealed that the precast element should be heavily reinforced to fulfill all the assumed demands. The main cause of the high amount of reinforcement is the size of the imperfections that could occur in the real structure. Improving manufacturing quality, installing the precast panels on a fresh base course, or using a bedding layer underneath the surface course are the main steps to reduce the size of the imperfections and consequently lower the consumption of reinforcement.

Keywords: nonlinear analysis, numerical simulation, precast concrete, pavement

Procedia PDF Downloads 242
2264 Everolimus Loaded Polyvinyl Alcohol Microspheres for Sustained Drug Delivery in the Treatment of Subependymal Giant Cell Astrocytoma

Authors: Lynn Louis, Bor Shin Chee, Marion McAfee, Michael Nugent

Abstract:

This article aims to develop a sustained release formulation of microspheres containing the mTOR inhibitor Everolimus (EVR), using polyvinyl alcohol (PVA) to enhance the bioavailability of the drug and to overcome the poor solubility characteristics of Everolimus. This paper builds on recent work in the manufacture of microspheres using the sessile droplet technique, in which the polymer-drug solution is frozen by suspending the droplets in pre-cooled ethanol vials immersed in liquid nitrogen. The spheres were subjected to 6 freezing cycles and 3 freeze-thaw cycles to obtain proper geometry, prevent aggregation, and achieve physical cross-linking. The prepared microspheres were characterised for surface morphology by SEM, where a 3-D porous structure was observed. The in vitro release studies showed a 62.17% release over 12.5 days, indicating a sustained release due to good encapsulation. This is considerably more than the 49.06% release achieved within 4 hours from the solvent-cast Everolimus film used as a control, for which no freeze-thaw cycles were performed; the solvent-cast films were made in this work for comparison. A prolonged release of Everolimus using a polymer-based drug delivery system is essential to reach optimal therapeutic concentrations in treating SEGA tumours without systemic exposure. These results suggest that the combination of PVA and Everolimus via a rheological synergism enhanced the bioavailability of the hydrophobic drug Everolimus. Physical-chemical characterisation using DSC and FTIR analysis showed compatibility of the drug with the polymer, and the stability of the drug was maintained owing to the high molecular weight of the PVA. The obtained results indicate that the developed PVA/EVR microsphere is highly suitable as a potential drug delivery system with improved bioavailability for treating subependymal giant cell astrocytoma (SEGA).

Keywords: drug delivery system, everolimus, freeze-thaw cycles, polyvinyl alcohol

Procedia PDF Downloads 103
2263 Comparison of 18F-FDG and 11C-Methionine PET-CT for Assessment of Response to Neoadjuvant Chemotherapy in Locally Advanced Breast Carcinoma

Authors: Sonia Mahajan Dinesh, Anant Dinesh, Madhavi Tripathi, Vinod Kumar Ramteke, Rajnish Sharma, Anupam Mondal

Abstract:

Background: Neo-adjuvant chemotherapy plays an important role in the treatment of breast cancer by decreasing the tumour load, and it offers an opportunity to evaluate the response of the primary tumour to chemotherapy. Standard anatomical imaging modalities are unable to accurately reflect the response to chemotherapy until several cycles of drug treatment have been completed. Metabolic imaging using tracers like 18F-fluorodeoxyglucose (FDG), a marker of glucose metabolism, or amino acid tracers like L-methyl-11C methionine (MET) has a potential role in the measurement of treatment response. In this study, our objective was to compare these two PET tracers for the assessment of response to neoadjuvant chemotherapy in locally advanced breast carcinoma. Methods: In our prospective study, 20 female patients with histologically proven locally advanced breast carcinoma underwent PET-CT imaging using FDG and MET before and after three cycles of neoadjuvant chemotherapy (CAF regimen). Thereafter, all patients underwent MRM, and the resected specimen was sent for histopathological analysis. Tumour response to the neoadjuvant chemotherapy was evaluated by PET-CT imaging using the PERCIST criteria and correlated with the histological results. The calculated responses were compared for statistical significance using paired t-tests. Results: The mean SUVmax for the primary lesion on FDG PET and MET PET was 15.88±11.12 and 5.01±2.14, respectively (p<0.001), and for axillary lymph nodes 7.61±7.31 and 2.75±2.27, respectively (p=0.001). A statistically significant response in the primary tumour and axilla was noted on both FDG and MET PET after three cycles of NAC. Complete response in the primary tumour was seen in only 1 patient on FDG PET and in 7 patients on MET PET (p=0.001), whereas there was no histological complete resolution of the tumour in any patient. The response to therapy in the axillary nodes noted on both PET scans was similar (p=0.45) and correlated well with the histological findings. Conclusions: For the primary breast tumour, FDG PET has a higher sensitivity and accuracy than MET PET, and for the axilla both have comparable sensitivity and specificity. FDG PET shows higher target-to-background ratios, so response is better predicted for the primary breast tumour and axilla. Also, FDG PET is widely available and has the advantage of whole-body evaluation in one study.

Keywords: 11C-methionine, 18F-FDG, breast carcinoma, neoadjuvant chemotherapy

Procedia PDF Downloads 496
2262 Concurrent Validity of Synchronous Tele-Audiology Hearing Screening

Authors: Thidilweli Denga, Bessie Malila, Lucretia Petersen

Abstract:

The Coronavirus Disease of 2019 (COVID-19) pandemic should be taken as a wake-up call on the importance of hearing health care, considering, amongst other things, the electronic methods of communication used. The World Health Organization (WHO) estimated that by 2050, there will be more than 2.5 billion people living with hearing loss. These numbers show that more people will need rehabilitation services. Studies have shown that most people living with hearing loss reside in low- and middle-income countries (LMIC). Innovative technological solutions, such as digital health interventions that can be used to deliver hearing health services to remote areas, now exist. Tele-audiology implementation can potentially enable the delivery of hearing loss services to rural and remote areas. This study aimed to establish the concurrent validity of tele-audiology practice in school-based hearing screening. The study employed a cross-sectional design with a within-group comparison. The portable KUDUwave audiometer was used to conduct hearing screening of 50 participants (n=50). In phase I of the study, the audiologist conducted on-site hearing screening, while synchronous remote hearing screening (tele-audiology) using a 5G network was done in phase II. On-site hearing screening was performed first for the first 25 participants (aged between 5 and 6 years); for the second half, screening started with the synchronous tele-audiology model to avoid an order effect. Repeated-sample t-tests compared the threshold results obtained in the left and right ears for on-site and remote screening. There was good correspondence between the two methods, with threshold averages within ±5 dB (decibels). The synchronous tele-audiology model has the potential to reduce audiologists' case overload while at the same time reaching populations that lack access due to distance and a shortage of hearing professionals in their areas. With reliable broadband connectivity, tele-audiology delivers the same service quality as the conventional method while reducing the travel costs of audiologists.

Keywords: hearing screening, low-resource communities, portable audiometer, tele-audiology

Procedia PDF Downloads 101
2261 Translanguaging and Cross-languages Analyses in Writing and Oral Production with Multilinguals: a Systematic Review

Authors: Maryvone Cunha de Morais, Lilian Cristine Hübner

Abstract:

Based on a translanguaging theoretical approach, which considers languages not as separate entities but as an entire repertoire available to bilingual individuals, this systematic review aimed to analyze the methods (aims, samples investigated, type of stimuli, and analyses) adopted by studies on translanguaging practices associated with written and oral tasks (separately or integrated) in bilingual education. The PRISMA criteria for systematic reviews were adopted, with the descriptors "translanguaging", "bilingual education" and/or "written and oral tasks" used to search the PubMed/Medline, LILACS, ERIC, Scopus, PsycINFO, and Web of Science databases for articles published between 2017 and 2021. 280 records were found and, after applying the inclusion/exclusion criteria, 24 articles were considered for this analysis. The results showed that translanguaging practices were investigated in four studies focused on written production analyses and ten focused on oral production analyses, whereas ten studies focused on both written and oral production analyses. The majority of the studies followed a qualitative approach, while five studies attempted to study translanguaging with quantitative statistical measures. Several types of methods were used to investigate translanguaging practices in written and oral production, with different approaches and tools, indicating that the methods are still in development. Moreover, the findings showed that students' interactions have received significant attention, and studies have been developed not just in language classes in bilingual education but also in diverse educational and theoretical contexts such as Content and Language Integrated Learning, task repetition, science classes, collaborative writing, storytelling, peer feedback, Speech Act theory and collective thinking, language ideologies, conversational analysis, and discourse analyses. The studies, whether focused on writing, oral tasks, or both, have significant research and pedagogical implications, grounded in the view of integrated languages in bi- and multilinguals.

Keywords: bilingual education, oral production, translanguaging, written production

Procedia PDF Downloads 113
2260 Applying Computer Simulation Methods to a Molecular Understanding of Flaviviruses Proteins towards Differential Serological Diagnostics and Therapeutic Intervention

Authors: Sergio Alejandro Cuevas, Catherine Etchebest, Fernando Luis Barroso Da Silva

Abstract:

The flavivirus genus comprises several viruses responsible for various diseases in humans. Especially in Brazil, the Zika (ZIKV), Dengue (DENV) and Yellow Fever (YFV) viruses have raised great health concerns due to the high number of cases affecting the area in recent years. Diagnosis is still a difficult issue, since the clinical symptoms are highly similar. Understanding their common structural/dynamical and biomolecular interaction features and their differences might suggest alternative strategies towards differential serological diagnostics and therapeutic intervention. Due to its immunogenicity, the primary focus of this study was the non-structural protein 1 (NS1) of ZIKV, DENV and YFV. By means of computational studies, we calculated the main physicochemical properties of this protein from different strains that are directly responsible for the biomolecular interactions and can therefore be related to the differential infectivity of the strains. We also mapped the electrostatic differences at both the sequence and structural levels for strains from Uganda to Brazil that could suggest possible molecular mechanisms for the increase in the virulence of ZIKV. It is interesting to note that, despite the small changes in the protein sequence due to the high sequence identity among the studied strains, the electrostatic properties are strongly impacted by pH, which also impacts their biomolecular interactions with partners and, consequently, the molecular biology of the virus. African and Asian strains are distinguishable. Exploring the interfaces used by NS1 to self-associate in different oligomeric states, and to interact with membranes and the antibody, we could map the strategy used by ZIKV during its evolutionary process. This indicates possible molecular mechanisms that can explain the different immunological responses. By comparison with the known antibody structure available for the West Nile virus, we demonstrated that the antibody would have difficulty neutralizing the NS1 of the Brazilian strain. The present study also opens up perspectives for the computational design of high-specificity antibodies.
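
To make the pH dependence mentioned above concrete, the sketch below estimates a protein's net charge as a function of pH with a simple Henderson-Hasselbalch model built from its sequence. The pKa set and the NS1-like fragment are illustrative assumptions; the study's constant-pH/electrostatics methodology is more sophisticated than this.

```python
# Typical textbook pKa values of ionizable groups (an assumption for illustration).
PKA = {"D": 3.9, "E": 4.1, "H": 6.0, "C": 8.3, "Y": 10.1, "K": 10.5, "R": 12.5,
       "N_term": 9.0, "C_term": 3.1}

def net_charge(sequence, ph):
    """Henderson-Hasselbalch estimate of a protein's net charge at a given pH."""
    counts = {aa: sequence.count(aa) for aa in "DEHCYKR"}
    counts["N_term"] = counts["C_term"] = 1
    q = 0.0
    for group in ("H", "K", "R", "N_term"):       # basic groups: add protonated fraction
        q += counts[group] / (1.0 + 10 ** (ph - PKA[group]))
    for group in ("D", "E", "C", "Y", "C_term"):  # acidic groups: subtract deprotonated fraction
        q -= counts[group] / (1.0 + 10 ** (PKA[group] - ph))
    return q

# Hypothetical NS1-like fragment (not a real strain sequence), scanned across pH.
fragment = "DVGCSVDFSKKETRCGTGVFVYNDVEAWRDRYKYHPDSPRRLA"
for ph in (5.0, 6.5, 7.4):
    print(f"pH {ph}: net charge = {net_charge(fragment, ph):+.2f}")
```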

Keywords: zika, biomolecular interactions, electrostatic interactions, molecular mechanisms

Procedia PDF Downloads 111
2259 Cataloguing Beetle Fauna (Insecta: Coleoptera) of India: Estimating Diversity, Distribution, and Taxonomic Challenges

Authors: Devanshu Gupta, Kailash Chandra, Priyanka Das, Joyjit Ghosh

Abstract:

Beetles, constituting the insect order Coleoptera, are the most species-rich group of organisms on the planet today. They represent about 40% of the total insect diversity of the world. With a considerable range of landform types, including significant mountain ranges, deserts, fertile irrigated plains, and hilly forested areas, India is one of the mega-diverse countries and harbours more than 0.1 million faunal species. Despite this rich biodiversity, efforts to catalogue the extant beetle species/taxa reported from India have been limited. Therefore, in this paper, information on the beetle fauna of India is provided based on the data available in the museum collections of the Zoological Survey of India and on taxa extracted from zoological records and published literature. The species are listed with their valid names, synonyms, type localities, type depositories, and their distribution in the states and biogeographic zones of India. The catalogue also incorporates a bibliography of Indian Coleoptera. The exhaustive species inventory prepared by us includes distributional records from the Himalaya, Trans-Himalaya, Desert, Semi-Arid, Western Ghats, Deccan Peninsula, Gangetic Plains, Northeast, Islands, and Coastal areas of the country. Our study concludes that many of the species are still known only from their type localities, so there is a need to revisit and resurvey those collection localities for the taxonomic evaluation of these species. There are species that exhibit single-locality records, and taxa-specific biodiversity assessments need to be undertaken to understand the distributional range of such species. The primary challenge is the taxonomic identification of species that were described before independence and whose type materials are held in overseas museums. For such species, taxonomic revisions of the different groups of beetles are required to solve the problems of identification and classification.
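
As an illustration of how the catalogue entries described above could be structured, the sketch below defines a hypothetical species record and a helper for flagging single-locality records. The field names and the example entry are invented for illustration and do not reflect the actual database schema of the Zoological Survey of India.

```python
# Hypothetical catalogue record for a beetle species; field names and the
# example entry are invented for illustration only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SpeciesRecord:
    valid_name: str                                  # accepted binomen + author
    synonyms: List[str] = field(default_factory=list)
    type_locality: str = ""
    type_depository: str = ""                        # museum holding the type material
    states: List[str] = field(default_factory=list)
    biogeographic_zones: List[str] = field(default_factory=list)
    references: List[str] = field(default_factory=list)

def single_locality_records(catalogue: List[SpeciesRecord]) -> List[SpeciesRecord]:
    """Species known from one state only -- candidates for resurvey."""
    return [r for r in catalogue if len(r.states) <= 1]

# Invented example entry (not real data):
example = SpeciesRecord(
    valid_name="Examplus fictus Smith, 1905",
    synonyms=["Examplus dubius Jones, 1921"],
    type_locality="Darjeeling",
    type_depository="Natural History Museum, London",
    states=["West Bengal"],
    biogeographic_zones=["Himalaya"],
)
print(len(single_locality_records([example])))  # -> 1
```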

Keywords: checklist, taxonomy, museum collections, biogeographic zones

Procedia PDF Downloads 238
2258 Neural Correlates of Attention Bias to Threat during the Emotional Stroop Task in Schizophrenia

Authors: Camellia Al-Ibrahim, Jenny Yiend, Sukhwinder S. Shergill

Abstract:

Background: Attention bias to threat plays a role in the development, maintenance, and exacerbation of delusional beliefs in schizophrenia, in which patients emphasize the threatening characteristics of stimuli and prioritise them for processing. Cognitive control deficits arise when task-irrelevant emotional information elicits attentional bias and obstructs optimal performance. This study investigates the neural correlates of the interference effect of linguistic threat and whether these effects are independent of delusional severity. Methods: Using event-related functional magnetic resonance imaging (fMRI), the neural correlates of the interference effect of linguistic threat during the emotional Stroop task were investigated, comparing patients with schizophrenia with high (N=17) and low (N=16) paranoid symptoms and healthy controls (N=20). Participants were instructed to identify the font colour of each word presented on the screen as quickly and accurately as possible. Stimulus types varied between threat-relevant, positive, and neutral words. Results: Group differences in whole-brain effects indicate decreased amygdala activity in patients with high paranoid symptoms compared with low-paranoid patients and healthy controls. Region-of-interest (ROI) analysis validated our results within the amygdala and investigated changes within the striatum, showing a pattern of reduced activation in the clinical group compared to healthy controls. Delusional severity was associated with significantly decreased neural activity in the striatum within the clinical group. Conclusion: Our findings suggest that the emotional interference mediated by the amygdala and striatum may reduce responsiveness to threat-related stimuli in schizophrenia and that attenuation of the fMRI blood-oxygen-level-dependent (BOLD) signal within these areas might be influenced by the severity of delusional symptoms.
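
For readers unfamiliar with the emotional Stroop interference effect, the sketch below illustrates the standard behavioural index: the mean colour-naming reaction time for threat words minus the mean for neutral words. The reaction times are invented, and the study's fMRI contrasts are not reproduced here.

```python
# Behavioural emotional-Stroop interference index: mean colour-naming RT for
# threat words minus mean RT for neutral words. Reaction times below are invented.
from statistics import mean

def interference_ms(rts_threat, rts_neutral):
    """Positive values indicate slower colour naming for threat words."""
    return mean(rts_threat) - mean(rts_neutral)

# Hypothetical correct-trial reaction times (ms) for one participant:
threat_rts = [712, 698, 731, 745, 720]
neutral_rts = [684, 671, 695, 702, 688]
print(f"Stroop interference: {interference_ms(threat_rts, neutral_rts):.1f} ms")
```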

Keywords: attention bias, fMRI, schizophrenia, Stroop

Procedia PDF Downloads 185
2257 Discerning Divergent Nodes in Social Networks

Authors: Mehran Asadi, Afrand Agah

Abstract:

In data mining, partitioning is used as a fundamental tool for classification. With the help of partitioning, we study the structure of data, which allows us to envision decision rules that can be applied to classification trees. In this research, we used an online social network dataset and all of its attributes (e.g., node features, labels, etc.) to determine what constitutes an above-average chance of being a divergent node. We used the R statistical computing language to conduct the analyses in this report. The data were obtained from the UC Irvine Machine Learning Repository. This research introduces the basic concepts of classification in online social networks. In this work, we address overfitting and describe different approaches for evaluating and comparing the performance of different classification methods. In classification, the main objective is to categorize different items and assign them to different groups based on their properties and similarities. In data mining, recursive partitioning is utilized to probe the structure of a data set, which allows us to envision decision rules and apply them to classify data into several groups. Estimating densities is hard, especially in high dimensions with limited data. Of course, we do not know the densities, but we could estimate them using classical techniques. First, we calculated the correlation matrix of the dataset to see if any predictors are highly correlated with one another. By calculating the correlation coefficients for the predictor variables, we see that density is strongly correlated with transitivity. We initialized a data frame to easily compare the quality of the resulting classification methods and utilized decision trees (with k-fold cross-validation to prune the tree). The method applied to this dataset is the decision tree, a non-parametric classification method that uses a set of rules to predict that each observation belongs to the most commonly occurring class label of the training data. Our method aggregates many decision trees to create an optimized model that is not susceptible to overfitting. When using a decision tree, however, it is important to use cross-validation to prune the tree in order to narrow it down to the most important variables.
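
The authors conducted their analysis in R; purely as an illustration, the sketch below reproduces the same workflow -- a correlation check among predictors followed by a decision tree pruned via k-fold cross-validation -- in Python with scikit-learn, using synthetic data and hypothetical column names in place of the UCI social-network dataset.

```python
# Illustrative Python/scikit-learn analogue of the R workflow described above:
# correlation check among predictors, then a decision tree pruned via
# k-fold cross-validation. Data and column names are synthetic stand-ins.
import numpy as np
import pandas as pd
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "degree": rng.poisson(8, n),
    "density": rng.uniform(0, 1, n),
    "transitivity": rng.uniform(0, 1, n),
})
# Make density and transitivity correlated, as reported in the abstract.
df["transitivity"] = 0.8 * df["density"] + 0.2 * df["transitivity"]
# Synthetic label for "divergent" nodes (hypothetical rule, illustration only).
y = ((df["degree"] > 8) & (df["density"] < 0.4)).astype(int)

print(df.corr())  # density vs. transitivity should show a strong correlation

# Cost-complexity pruning strength chosen by 10-fold cross-validation.
search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"ccp_alpha": [0.0, 0.001, 0.005, 0.01, 0.05]},
    cv=10,
)
search.fit(df, y)
print("best ccp_alpha:", search.best_params_,
      "CV accuracy:", round(search.best_score_, 3))
```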

Keywords: online social networks, data mining, social cloud computing, interaction and collaboration

Procedia PDF Downloads 135
2256 Prediction on the Pursuance of Separation of Catalonia from Spain

Authors: Francis Mark A. Fernandez, Chelca Ubay, Armithan Suguitan

Abstract:

Regions or provinces in a given state certainly contribute to the economy of their mainland. These regions or provinces supply the mainland with different resources and assets. Thus, a region separating from the mainland would indeed impair the ability of the entire state to develop and expand. With this in mind, the researchers decided to study the effects of a region's separation on its mainland and the consequences that would take place if the mainland were to prevent the region from separating. The researchers wrote this paper to present the causes of the separation movement of Catalonia from Spain and a prediction regarding the region's pursuit of separation from its mainland, Spain. In conducting this research, the researchers utilized two analyses, namely qualitative and quantitative. In the qualitative analysis, extensive information regarding the lived experiences of the citizens of Catalonia was gathered by the authors to lend certainty to the researchers' prediction. Besides this, the researchers also gathered the needed information and figures from books, journals, and published news and reports. In addition, to further support this prediction under the qualitative analysis, the researchers intended to conduct phenomenological research, examining the lived experiences of individual citizens of Catalonia. Moreover, the researchers used one type of phenomenological research, namely Van Manen's hermeneutical phenomenology. In the quantitative analysis, the researchers utilized regression analysis, which ascertains the causality posited by an underlying theory in understanding the relationships among the variables. The researchers identified the variables as follows: the dependent variable y represents the researchers' prediction; the independent variable x represents the arising problems that ground the partition of the region; the summation of the independent variable, ∑x, represents the sum of those problems; and the summation of the dependent variable, ∑y, is the aggregated result of the prediction. With these variables, using regression analysis, the researchers are able to show the connections and how a single variable could affect the others. From these approaches, the researchers' prediction is specified. This research could help different states dealing with this kind of problem. It could further help states undergoing this problem by analyzing the causes of these movements and the effects of obstructing a region's pursuit of full-fledged autonomy.
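
As a minimal worked example of how the summations ∑x and ∑y mentioned above enter a simple linear regression, the sketch below fits an ordinary least-squares line directly from those sums; the x and y values are invented placeholders, not data from the study.

```python
# Simple linear regression expressed through the summations (sum of x, sum of y,
# sum of xy, sum of x^2) referred to in the abstract. Values are invented.
def ols_line(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

# Hypothetical index of "arising problems" (x) vs. separation sentiment (y):
x = [1, 2, 3, 4, 5]
y = [2.1, 2.9, 4.2, 4.8, 6.1]
b, a = ols_line(x, y)
print(f"y ~ {a:.2f} + {b:.2f} * x")
```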

Keywords: autonomy, liberty, prediction, separation

Procedia PDF Downloads 236
2255 Simulation of the FDA Centrifugal Blood Pump Using High Performance Computing

Authors: Mehdi Behbahani, Sebastian Rible, Charles Moulinec, Yvan Fournier, Mike Nicolai, Paolo Crosetto

Abstract:

Computational Fluid Dynamics (CFD) blood-flow simulations are increasingly used to develop and validate blood-contacting medical devices. This study shows that numerical simulations can provide additional and accurate estimates of relevant hemodynamic indicators (e.g., recirculation zones or wall shear stresses), which may be difficult and expensive to obtain from in-vivo or in-vitro experiments. The most recent FDA (Food and Drug Administration) benchmark consisted of a simplified centrifugal blood pump model that contains fluid-flow features as they are commonly found in these devices, with a clear focus on highly turbulent phenomena. The FDA centrifugal blood pump study comprises six test cases with different volumetric flow rates ranging from 2.5 to 7.0 liters per minute, different pump speeds, and Reynolds numbers ranging from 210,000 to 293,000. Within the frame of this study, different turbulence models were tested, including RANS models (e.g., k-omega, k-epsilon, and a Reynolds stress model, RSM) as well as LES. The partitioners Hilbert, METIS, ParMETIS and SCOTCH were used to partition an unstructured mesh of 76 million elements and were compared in terms of efficiency. Computations were performed on the JUQUEEN BG/Q architecture applying the highly parallel flow solver Code SATURNE, typically using 32768 or more processors in parallel. Visualisations were performed by means of PARAVIEW. All six flow situations could be successfully analysed with the different turbulence models and validated against analytical considerations and by comparison with other databases. The results showed that an RSM represents an appropriate choice for modeling high-Reynolds-number flow cases. In particular, the Rij-SSG (Speziale, Sarkar, Gatski) variant turned out to be a good approach. Visualisation of complex flow features could be obtained, and the flow situation inside the pump could be characterized.
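
As a back-of-the-envelope check on the Reynolds-number range quoted above, the sketch below evaluates a rotational Reynolds number Re = ρΩD²/μ; the rotor diameter, pump speeds, and blood properties are assumed nominal values rather than figures taken from the paper.

```python
# Rotational Reynolds number Re = rho * omega * D^2 / mu for a centrifugal
# blood pump. Rotor diameter, pump speeds, and blood properties are assumed
# nominal values, not figures taken from the paper.
import math

RHO = 1056.0      # blood density, kg/m^3 (assumed)
MU = 3.5e-3       # blood dynamic viscosity, Pa*s (assumed Newtonian)
D = 0.052         # rotor diameter, m (assumed nominal value)

def rotational_reynolds(rpm: float) -> float:
    omega = 2.0 * math.pi * rpm / 60.0   # angular velocity, rad/s
    return RHO * omega * D ** 2 / MU

for rpm in (2500, 3500):
    print(f"{rpm} rpm -> Re ~ {rotational_reynolds(rpm):,.0f}")
# Yields values on the order of 2e5-3e5, consistent with the quoted range.
```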

Keywords: blood flow, centrifugal blood pump, high performance computing, scalability, turbulence

Procedia PDF Downloads 372
2254 Open Innovation in SMEs: A Multiple Case Study of Collaboration between Start-ups and Craft Enterprises

Authors: Carl-Philipp Valentin Beichert, Marcel Seger

Abstract:

Digital transformation and climate change require small and medium-sized enterprises (SMEs) to rethink their way of doing business. Inter-firm collaboration is recognized as a helpful means of promoting innovation and competitiveness. In this context, collaborations with start-ups offer valuable opportunities through their innovative products, services, and business models. SMEs, and in particular German craft enterprises, play an important role in the country's society and economy. Companies in this heterogeneous economic sector have unique characteristics and are limited in their ability to innovate due to their small size and lack of resources. Collaborating with start-ups could help to overcome these shortcomings. To investigate how collaborations emerge and which factors are decisive for driving them successfully, we apply an explorative, qualitative research design. A sample of ten case studies was selected, with the collaboration between a start-up and a craft enterprise forming the unit of analysis. Semi-structured interviews with 20 company representatives allow for a two-sided perspective on the respective collaboration. The interview data are enriched by publicly available data and three expert interviews. As a result, objectives, initiation practices, applied collaboration types, barriers, and key success factors could be identified. The results indicate a three-phase collaboration process comprising an initiation, concept, and partner phase (ICP). The proposed ICP framework accordingly highlights the success factors (personal fit, communication, expertise, structure, network) for craft enterprises and start-ups in each collaboration phase. The role of a mediator in the start-up company with strong expertise in the respective craft sector is considered an important lever for overcoming barriers such as cultural and communication differences. The ICP framework thus provides promising directions for further research and can help practitioners establish successful collaborations.

Keywords: open innovation, SME, craft businesses, startup collaboration, qualitative research

Procedia PDF Downloads 79