Search results for: non singular evolution of universe
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2154

1614 Structural Analysis and Evolution of 18th Century Ottoman Imperial Mosques (1750-1799) in Comparison with the Classical Period Examples

Authors: U. Demir

Abstract:

The 18th century, a period of 'change' in the Ottoman Empire, affected architecture as well: the Classical period was left behind, and architecture developed a different formal language. This change is especially noticeable in monumental buildings and thus manifested itself in the mosques. But is it possible to speak of a structural counterpart to the 'change' that occurred in decoration? The aim of this study is to investigate the changes, and the continuities with the Classical period, in the 18th century mosques through their plan schemes and structural systems. The study focuses on the monumental mosques constructed during the reigns of the three sultans who ruled in the second half of the century (Mustafa III, 1757-1774; Abdülhamid I, 1774-1789; and Selim III). In order of construction, these are the Ayazma, Laleli, Zeyneb Sultan, Fatih, Beylerbeyi, Şebsefa Kadın, Eyüb Sultan, Mihrişah Valide Sultan, and Üsküdar-Selimiye mosques. In plan, four mosques have a square or nearly square scheme, while the others have a rectangular scheme developed longitudinally along the mihrab axis; this arrangement is widespread throughout the period. Alongside the longitudinally developed plan, which is the general characteristic of 18th century mosques, the use of Classical plan schemes continued. In five mosques the mihrab area is treated as a distinct space, while in the others it is a niche in the wall surface; this, too, is widespread in the second half of the century. In the Classical period, the lodges could be located at the rear of the mosque interior without interfering with the main worship area. In this period, the lodges were withdrawn from the main worship area and separated from the main interior by their own structural and covering systems.
The plans appear to result from the addition of lodge sections to the northern part of the Classical-period mosque plan. The 18th century mosques are constructions in which the change of architectural language and style can be observed easily. This change, and the break from the Classical period, manifested itself quickly in structural elements, wall surface decorations, pencil-work designs, small-scale decorative elements, and motifs; yet the speed and intensity of change in the decoration did not extend equally to the structural context. The traditional and Classical rules of mosque construction persisted through the century: some mosques have plans inherited from their Classical predecessors, while others were constructed according to the same Classical rules. Nonetheless, the relocation and transformation of the lodges, which affect the interior design, are noteworthy; they mark a significant transition toward the new language of mosque design that would emerge in the following century. Within the scope of this conference, the study aims to draw attention to the structural evolution of 18th century Ottoman architecture through the royal mosques.

Keywords: mosque structure, Ottoman architecture, structural evolution, 18th century architecture

Procedia PDF Downloads 194
1613 Blood Volume Pulse Extraction for Non-Contact Photoplethysmography Measurement from Facial Images

Authors: Ki Moo Lim, Iman R. Tayibnapis

Abstract:

According to WHO estimates, 38 of the 56 million (68%) global deaths in 2012 were due to noncommunicable diseases (NCDs). One solution for averting NCDs is early detection of disease. To that end, we developed a 'U-Healthcare Mirror', which is able to measure vital signs such as heart rate (HR) and respiration rate without any physical contact or user awareness. To measure HR in the mirror, we utilized a digital camera. The camera records red, green, and blue (RGB) discoloration from sequences of the user's facial images. We extracted the blood volume pulse (BVP) from the RGB discoloration because the discoloration of the facial skin follows the BVP. We used blind source separation (BSS) to extract the BVP from the RGB discoloration and adaptive filters to remove noise, implementing both the BSS and the adaptive filters with the singular value decomposition (SVD) method. HR was estimated from the obtained BVP. We conducted HR measurement experiments using our method and a previous method based on independent component analysis (ICA), comparing both against HR measurements from a commercial oximeter. The experiments were conducted at distances of 30-110 cm and light intensities of 5-2000 lux; for each condition, measurements were repeated 7 times. The estimated HR showed a mean error of 2.25 bpm and a Pearson correlation coefficient of 0.73. The accuracy has improved compared to previous work. The optimal distance between the mirror and the user for HR measurement was 50 cm at medium light intensity, around 550 lux.
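The core of the pipeline described above (separate the RGB traces into uncorrelated components via SVD, then pick the component whose spectrum peaks in a plausible heart-rate band) can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's implementation: the mixing matrix, noise level, and the 0.75-3 Hz search band are our assumptions.

```python
import numpy as np

# Synthetic stand-in for the RGB traces averaged over the facial region.
# Assumption: the pulse appears as a shared periodic component mixed into
# all three color channels; real data would come from camera frames.
rng = np.random.default_rng(0)
fs = 30.0                       # camera frame rate, Hz
t = np.arange(600) / fs         # 20 s of samples
true_hr = 72.0                  # ground-truth heart rate, bpm
pulse = np.sin(2 * np.pi * (true_hr / 60.0) * t)
noise = 0.3 * rng.normal(size=(2, t.size))      # weaker interfering sources
mixing = np.array([[0.3, 0.5, 0.2],             # per-channel source weights
                   [0.8, 0.2, 0.6],             # (hypothetical values)
                   [0.4, 0.7, 0.3]])
rgb = mixing @ np.vstack([pulse, noise])        # 3 x N observed traces

# Blind source separation via SVD: the right singular vectors of the
# centered data are uncorrelated candidate source signals.
centered = rgb - rgb.mean(axis=1, keepdims=True)
_, _, vt = np.linalg.svd(centered, full_matrices=False)

# Pick the component with the most spectral power in the plausible HR
# band (0.75-3 Hz, i.e. 45-180 bpm), then read HR off its peak frequency.
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
band = (freqs >= 0.75) & (freqs <= 3.0)
best_ratio, est_hr = 0.0, None
for comp in vt:
    spec = np.abs(np.fft.rfft(comp)) ** 2
    ratio = spec[band].sum() / spec.sum()
    if ratio > best_ratio:
        best_ratio = ratio
        est_hr = 60.0 * freqs[band][np.argmax(spec[band])]

print(f"estimated HR: {est_hr:.1f} bpm")
```

In the paper the same SVD machinery also drives the adaptive noise filtering; here only the source-separation and band-selection steps are shown.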

Keywords: blood volume pulse, heart rate, photoplethysmography, independent component analysis

Procedia PDF Downloads 324
1612 Feasibility of Weakly Interacting Massive Particles as Dark Matter Candidates: Exploratory Study on The Possible Reasons for Lack of WIMP Detection

Authors: Sloka Bhushan

Abstract:

Dark matter constitutes a majority of the matter in the universe, yet very little is known about it due to its extreme lack of interaction with regular matter through the fundamental forces. Weakly Interacting Massive Particles, or WIMPs, have been proposed as one of the strongest candidates for dark matter due to their promising theoretical properties. However, various endeavors to detect these elusive particles have failed. This paper explores the various particles which may be WIMPs and the techniques being employed to detect them (such as underground detectors, LHC experiments, and so on). There is a special focus on the reasons for the lack of detection of WIMPs so far, and on the possibility that limits in detection sensitivity account for the lack of physical evidence for the existence of WIMPs. This paper also explores possible inconsistencies within WIMP particle theory as a reason for the lack of physical detection, with a brief review of possible solutions and alternatives to these inconsistencies. Additionally, this paper reviews supersymmetry theory and the possibility that the supersymmetric neutralino (a possible WIMP particle) is detectable. Lastly, a review of alternative dark matter candidates such as axions and MACHOs has been conducted. The explorative study in this paper is conducted through a series of literature reviews.

Keywords: dark matter, particle detection, supersymmetry, weakly interacting massive particles

Procedia PDF Downloads 134
1611 The Effect of Diapirs on the Geometry and Evolution of the Ait Ourir Basin, High Atlas Mountains of Marrakesh, Morocco

Authors: Hadach Fatiha, Algouti Ahmed, Algouti Abdellah, Jdaba Naji, Es-Sarrar Othman, Mourabit Zahra

Abstract:

This paper investigates the structure and evolution of diapirism in the Ait Ourir basin, located in the High Atlas of Marrakesh, using structural and sedimentological fieldwork integrated with field mapping. A tectonic-sedimentological study of the Mesozoic cover of the Ait Ourir basin revealed that these units were subjected to important episodic (saccadic) halokinetic activity, reflected in anticline structures associated with regional faults that created several synclinal mini-basins. However, owing to the lack of seismic coverage in the study area, the proposed interpretation is based on extrapolation of surface observations. In this work, we suggest that faults and salt activity led to the formation of the different structures within the studied area. The growth of the Triassic evaporites at different stages during the Mesozoic is reflected by progressive and local unconformities of different recorded ages. These structures created high diapiric zones with reduced sedimentation, showing abrupt lateral thickness variations in several places where this activity was occurring; this is clearly defined within the Wanina and Jbel Sour mini-basins, where the Senonian was observed to rest in angular unconformity over the entire sedimentary cover spanning the Liassic to the Turonian. The diapirism associated with the major faults, especially encountered between the basins, is often accompanied by late Triassic volcanic material. This diapir-fault relationship resulted in shallow and often depocentric zones in a pull-apart system within a distensive context.

Keywords: diapir, evaporites, faults, pull-apart, Mesozoic cover, Ait Ourir, western High Atlas, Morocco

Procedia PDF Downloads 64
1610 Understanding the Damage Evolution and the Risk of Failure of Pyrrhotite Containing Concrete Foundations

Authors: Marisa Chrysochoou, James Mahoney, Kay Wille

Abstract:

Pyrrhotite is an iron-sulfide mineral which releases sulfuric acid when exposed to water and oxygen. The presence of this mineral in concrete foundations across Connecticut and Massachusetts in the US is in some cases causing premature failure. This has resulted in a devastating crisis for all parties affected by this type of failure, which can take 15-25 years before internal damage becomes visible on the surface. This study shares laboratory results aimed at investigating the fundamental mechanisms of the pyrrhotite reaction and at furthering the understanding of its deterioration kinetics within concrete. The work includes the following analyses: total sulfur, wavelength-dispersive X-ray fluorescence, expansion, reaction rate combined with ion chromatography, and damage evolution using electro-chemical acceleration. This information is coupled to a statistical analysis of over 150 analyzed concrete foundations. Those samples were obtained and processed using a sampling method, developed and validated here, that is minimally invasive to the foundation in use, provides representative samples of the concrete matrix across the entire foundation, and is time- and cost-efficient. The processed samples were then analyzed using a modular testing method based on total sulfur and wavelength-dispersive X-ray fluorescence analysis to quantify the amount of pyrrhotite. As part of the statistical analysis, the results were grouped into three categories: no damage observed and no pyrrhotite detected, no damage observed and pyrrhotite detected, and damage observed and pyrrhotite detected. As expected, a strong correlation between the amount of pyrrhotite, the age of the concrete, and damage is observed. Information from the laboratory investigation and from the statistical analysis of field samples will aid in forming a scientific basis to support the decision process towards sustainable financial and administrative solutions by state and local stakeholders.

Keywords: concrete, pyrrhotite, risk of failure, statistical analysis

Procedia PDF Downloads 64
1609 Correction of Frequent English Writing Errors by Using Coded Indirect Corrective Feedback and Error Treatment

Authors: Chaiwat Tantarangsee

Abstract:

The purposes of this study are: 1) to study the frequent English writing errors of students registered in the course Reading and Writing English for Academic Purposes II, and 2) to find out the results of writing error correction using coded indirect corrective feedback and writing error treatments. The sample comprises 28 second-year English major students of the Faculty of Education, Suan Sunandha Rajabhat University. The tool for the experimental study is the lesson plan of the course Reading and Writing English for Academic Purposes II, and the tool for data collection is 4 writing tests of short texts. The research findings disclose that the frequent English writing errors found in this course comprise 7 types of grammatical errors, namely sentence fragments, subject-verb agreement, wrong form of verb tense, singular or plural noun endings, run-on sentences, wrong form of verb pattern, and lack of parallel structure. Moreover, it is found that writing error correction using coded indirect corrective feedback and error treatment yields an overall reduction of the frequent English writing errors and an increase in students' achievement in the writing of short texts, significant at the .05 level.

Keywords: coded indirect corrective feedback, error correction, error treatment, frequent English writing errors

Procedia PDF Downloads 232
1608 LTE Performance Analysis in the City of Bogota Northern Zone for Two Different Mobile Broadband Operators over Qualipoc

Authors: Víctor D. Rodríguez, Edith P. Estupiñán, Juan C. Martínez

Abstract:

The evolution of mobile broadband technologies has allowed user download rates to increase for current services. The evaluation of technical parameters at the link level is of vital importance to validate the quality and veracity of the connection, thus avoiding large losses of data, time, and productivity. Some of these failures may occur between the eNodeB (Evolved Node B) and the user equipment (UE), so the link between the end device and the base station must be observed. LTE (Long Term Evolution) is considered one of the IP-oriented mobile broadband technologies that works stably for data and, on devices supporting it, VoIP (Voice over IP). This research presents a technical analysis of the connection and channeling processes between the UE and the eNodeB using the TAC (Tracking Area Code) variables, together with an analysis of performance variables (throughput, Signal to Interference and Noise Ratio (SINR)). Three measurement scenarios were proposed in the city of Bogotá using QualiPoc, evaluating two operators (Operator 1 and Operator 2). Once the data were obtained, an analysis of the variables was performed, determining that the data obtained in the transmission modes vary depending on the parameters BLER (Block Error Rate), throughput, and SNR (Signal-to-Noise Ratio). For both operators, differences in transmission modes were detected, and this is reflected in the quality of the signal. In addition, because the operators work on different frequencies, it can be seen that Operator 1, despite having spectrum in Band 7 (2600 MHz) like Operator 2, is reassigning users to a lower band, AWS (1700 MHz); the difference in signal quality relative to data connections on Operator 2, and the difference in the transmission modes determined by the eNodeB for Operator 1, are remarkable.

Keywords: BLER, LTE, network, QualiPoc, SNR

Procedia PDF Downloads 112
1607 Impacts on Atmospheric Mercury from Changes in Climate, Land Use, Land Cover, and Wildfires

Authors: Shiliang Wu, Huanxin Zhang, Aditya Kumar

Abstract:

There have been increasing concerns about atmospheric mercury as a toxic and bioaccumulative pollutant in the global environment. Global change, including changes in climate, land use, land cover, and wildfire activity, can have significant impacts on atmospheric mercury. In this study, we use a global chemical transport model (GEOS-Chem) to examine the potential impacts of global change on atmospheric mercury. All of these factors are found to have significant impacts on the long-term evolution of atmospheric mercury and can substantially alter the global source-receptor relationships for mercury. We also estimate the global Hg emissions from wildfires for the present day and the potential impacts of the 2000-2050 changes in climate, land use and land cover, and Hg anthropogenic emissions, by combining statistical analysis with global data on vegetation type and coverage as well as fire activity. Present-day global Hg wildfire emissions are estimated to be 612 Mg year⁻¹. Africa is the dominant source region (43.8% of global emissions), followed by Eurasia (31%) and South America (16.6%). We find significant perturbations to wildfire emissions of Hg in the context of global change, driven by the projected changes in climate, land use and land cover, and Hg anthropogenic emissions. Climate change over 2000-2050 could increase Hg emissions by 14% globally. Projected changes in land use by 2050 could decrease global Hg emissions from wildfires by 13%, mainly driven by a decline in African emissions due to significant agricultural land expansion. Future land cover changes could lead to significant increases in Hg emissions over some regions (+32% North America, +14% Africa, +13% Eurasia). Potential enrichment of terrestrial ecosystems by 2050 in response to changes in Hg anthropogenic emissions could increase Hg wildfire emissions both globally (+28%) and regionally.
Our results indicate that the future evolution of climate, land use and land cover, and Hg anthropogenic emissions are all important factors affecting Hg wildfire emissions in the coming decades.
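The regional shares quoted above can be turned into absolute emission estimates with a few lines of arithmetic. The figures are taken from the abstract; the variable names and rounding are ours.

```python
# Present-day global Hg wildfire emissions and regional shares (from the
# abstract), converted to absolute values in Mg per year.
total_mg_per_yr = 612.0
shares = {"Africa": 0.438, "Eurasia": 0.31, "South America": 0.166}

regional = {region: total_mg_per_yr * frac for region, frac in shares.items()}

# Scenario deltas quoted in the abstract, applied to the global total:
climate_2050 = total_mg_per_yr * 1.14    # +14% under 2000-2050 climate change
land_use_2050 = total_mg_per_yr * 0.87   # -13% under projected land-use change

for region, value in regional.items():
    print(f"{region}: {value:.0f} Mg/yr")
```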

Keywords: climate change, land use, land cover, wildfires

Procedia PDF Downloads 319
1606 Approaches for Minimizing Radioactive Tritium and ¹⁴C in Advanced High Temperature Gas-Cooled Reactors

Authors: Longkui Zhu, Zhengcao Li

Abstract:

High temperature gas-cooled reactors (HTGRs) are considered one of the next-generation advanced nuclear reactors, in which porous nuclear graphite is used for neutron moderators, reflectors, and structural materials, and inert helium is used as the coolant. Radioactive tritium and ¹⁴C are generated by reactions of thermal neutrons with ⁶Li, ¹⁴N, and ¹⁰B impurities within the nuclear graphite and the coolant during HTGR operation. In this work, hydrogen and nitrogen diffusion behavior, together with nuclear graphite microstructure evolution, was investigated to minimize radioactive waste release, using thermogravimetric analysis, X-ray computed tomography, the BET method, and mercury standard porosimetry. It is found that the peak of graphite weight loss appeared at 573-673 K, owing to nitrogen diffusing out of the graphite pores when the system was subjected to vacuum. Macropore volume became larger, while mesopore porosity decreased, from ambient temperature up to 1073 K, primarily induced by coalescence of the subscale pores. It is suggested that porous nuclear graphite should first be subjected to vacuum at 573-673 K to minimize nitrogen and the resulting radioactive ¹⁴C before operation in HTGRs. Results on hydrogen diffusion then show that diffusible hydrogen and tritium can permeate into the coolant with diffusion coefficients of > 0.5 × 10⁻⁴ cm²·s⁻¹ at 50 bar. As a consequence, freshly generated diffusible tritium can be released quickly once formed, and an effective approach for minimizing radioactive tritium is to keep impurity contents extremely low in the nuclear graphite and the coolant. Besides, both two- and three-dimensional observations indicate that macro- and mesopore volume, along with total porosity, decreased with temperature at 50 bar on account of the synergistic effects of applied compressive strain, sharpened pore morphology, and non-uniform temperature distribution.

Keywords: advanced high temperature gas-cooled reactor, hydrogen and nitrogen diffusion, microstructure evolution, nuclear graphite, radioactive waste management

Procedia PDF Downloads 309
1605 Electrochemical Top-Down Synthesis of Nanostructured Support and Catalyst Materials for Energy Applications

Authors: Peter M. Schneider, Batyr Garlyyev, Sebastian A. Watzele, Aliaksandr S. Bandarenka

Abstract:

Functional nanostructures such as nanoparticles are a promising class of materials for energy applications due to their unique properties. Bottom-up synthetic routes to nanostructured materials often involve multiple synthesis steps and the use of surfactants, reducing agents, or stabilizers, resulting in complex and extensive synthesis protocols. In recent years, a novel top-down synthesis approach for forming metal nanoparticles has been established, in which bulk metal wires are immersed in an electrolyte (primarily alkaline earth metal-based) and subsequently subjected to a high alternating potential. This leads to the generation of nanoparticles dispersed in the electrolyte. The main advantage of this facile top-down approach is that no reducing agents, surfactants, or precursor solutions are required; the complete synthesis can be performed in one pot, in one main step, with subsequent washing and drying of the nanoparticles. More recent studies investigated the effect of synthesis parameters such as potential amplitude, frequency, and electrolyte composition and concentration on the size and shape of the nanoparticles. Here, we investigate the electrochemical erosion of various metal wires such as Ti, Pt, Pd, and Sn in various electrolyte compositions via this facile top-down technique, and its experimental optimization, to synthesize nanostructured materials for various energy applications. For example, for Pt and Pd, homogeneously distributed nanoparticles on a carbon support can be obtained; these materials can be used as electrocatalysts for the oxygen reduction reaction (ORR) and hydrogen evolution reaction (HER), respectively. In comparison, top-down erosion of Sn wires leads to the formation of nanoparticles with great potential as oxygen evolution reaction (OER) support materials.
Applying the technique to Ti wires, surprisingly, leads to the formation of nanowires, which show a high surface area and great potential as an alternative support material to carbon.

Keywords: ORR, electrochemistry, electrocatalyst, synthesis

Procedia PDF Downloads 76
1604 Breast Cancer: The Potential of miRNA for Diagnosis and Treatment

Authors: Abbas Pourreza

Abstract:

MicroRNAs (miRNAs) are small, single-stranded, non-coding RNAs. They are about 18-25 nucleotides long and highly conserved through evolution. They are involved in adjusting the expression of numerous genes owing to a region complementary to particular mRNAs, generally in the 3' untranslated region (UTR) of target genes. miRNAs have also been shown to be involved in cell development, differentiation, proliferation, and apoptosis. More than 2000 miRNAs have been recognized in human cells, and they adjust approximately one-third of all genes in human cells. Dysregulation of miRNAs, which can originate from abnormal DNA methylation patterns at the locus, leads to down-regulation or overexpression of miRNAs and may affect tumor formation or progression. Breast cancer (BC) is the most commonly identified cancer, the most prevalent cancer in females (23%), and the second-leading cause of cancer mortality in females (14%). BC can be classified based on the status (+/−) of the hormone receptors, including the estrogen receptor (ER), the progesterone receptor (PR), and the receptor tyrosine-protein kinase erbB-2 (ERBB2 or HER2). Currently, there are four main molecular subtypes of BC: luminal A, approximately 50-60% of BCs; luminal B, 10-20%; HER2-positive, 15-20%; and the basal (triple-negative breast cancer, TNBC) subtype, 10-20%. Aberrant expression of miR-145, miR-21, miR-10b, miR-125a, and miR-206 was detected by stem-loop real-time RT-PCR in BC cases. Breast tumor formation and development may result from down-regulation of tumor-suppressor miRNAs such as miR-145, miR-125a, and miR-206, and/or overexpression of oncogenic miRNAs such as miR-21 and miR-10b. miR-125a, miR-206, miR-145, miR-21, and miR-10b are strongly predicted to be new tumor markers for the diagnosis and prognosis of BC.
miR-21 and miR-125a could play a part in the treatment of HER2-positive breast cancer cells, while miR-145 and miR-206 could accelerate the development of treatment strategies for TNBC. In conclusion, miRNAs emerge as promising molecules for the primary diagnosis, prognosis, and treatment of BC, and for the battle against its acquired drug resistance.

Keywords: breast cancer, HER2 positive, miRNA, TNBC

Procedia PDF Downloads 88
1603 Natural Factors of Interannual Variability of Winter Precipitation over the Altai Krai

Authors: Sukovatov K.Yu., Bezuglova N.N.

Abstract:

Winter precipitation variability over the Altai Krai was investigated by retrieving temporal patterns. Singular spectrum analysis (SSA) was used to describe the variance distribution and to reduce the precipitation data to a few components (modes). The associated time series were related to large-scale atmospheric and oceanic circulation indices using lag cross-correlation and wavelet-coherence analysis. GPCC monthly precipitation data for a rectangular domain bounded by 50-55°N, 77-88°E, and monthly climatological circulation-index data for the cold season, were used to perform the SSA decomposition and retrieve statistics for the analyzed parameters over the period 1951-2017. Interannual variability of winter precipitation over the Altai Krai is mostly caused by three natural factors: variations in the intensity of momentum exchange between mid and polar latitudes over the North Atlantic (explained variance 11.4%); wind speed variations in the equatorial stratosphere (the quasi-biennial oscillation, explained variance 15.3%); and sea surface temperature variations in the equatorial Pacific (ENSO, explained variance 2.8%). It is concluded that under current climate conditions (Arctic amplification and the increasing frequency of meridional processes in mid-latitudes), the second and third factors contribute more to the explained variance of interannual variability of cold-season precipitation over the Altai Krai than the first.
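The SSA decomposition described above (embed the series in a trajectory matrix, SVD it, read off the explained variance per mode, and reconstruct a mode by diagonal averaging) can be sketched as follows. This is a minimal illustration on a synthetic oscillation; the window length and the series itself are our assumptions, not the GPCC data.

```python
import numpy as np

def ssa_decompose(x, window):
    """Embed x into a trajectory matrix (window x k) and SVD it."""
    n = len(x)
    k = n - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    return u, s, vt

def reconstruct(u, s, vt, modes):
    """Diagonal-average the chosen elementary matrices back into a series."""
    window, k = u.shape[0], vt.shape[1]
    n = window + k - 1
    elem = (u[:, modes] * s[modes]) @ vt[modes, :]   # sum of rank-1 terms
    out, cnt = np.zeros(n), np.zeros(n)
    for i in range(window):
        for j in range(k):
            out[i + j] += elem[i, j]   # accumulate along anti-diagonals
            cnt[i + j] += 1
    return out / cnt

rng = np.random.default_rng(1)
t = np.arange(200)
signal = np.sin(2 * np.pi * t / 12.0)          # oscillatory mode, ~12-step period
x = signal + 0.3 * rng.normal(size=t.size)     # noisy "precipitation" series

u, s, vt = ssa_decompose(x, window=60)
var_frac = s**2 / np.sum(s**2)                 # explained variance per mode
recon = reconstruct(u, s, vt, modes=[0, 1])    # the oscillatory pair of modes
```

An oscillation appears as a pair of nearly equal singular values, which is why the reconstruction above uses modes 0 and 1 together; the `var_frac` values play the role of the explained-variance percentages quoted in the abstract.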

Keywords: interannual variability, winter precipitation, Altai Krai, wavelet-coherence

Procedia PDF Downloads 178
1602 Using Squeezed Vacuum States to Enhance the Sensitivity of Ground Based Gravitational Wave Interferometers beyond the Standard Quantum Limit

Authors: Giacomo Ciani

Abstract:

This paper reviews the impact of quantum noise on modern gravitational wave interferometers and explains how squeezed vacuum states are used to push the noise below the standard quantum limit. With the first detection of gravitational waves from a pair of colliding black holes in September 2015 and subsequent detections including that of gravitational waves from a pair of colliding neutron stars, the ground-based interferometric gravitational wave observatories LIGO and VIRGO have opened the era of gravitational-wave and multi-messenger astronomy. Improving the sensitivity of the detectors is of paramount importance to increase the number and quality of the detections, fully exploiting this new information channel about the universe. Although still in the commissioning phase and not at nominal sensitivity, these interferometers are designed to be ultimately limited by a combination of shot noise and quantum radiation pressure noise, which define an envelope known as the standard quantum limit. Despite the name, this limit can be beaten with the use of advanced quantum measurement techniques, with the use of squeezed vacuum states being currently the most mature and promising. Different strategies for implementation of the technology in the large-scale detectors, in both their frequency-independent and frequency-dependent variations, are presented, together with an analysis of the main technological issues and expected sensitivity gain.
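The trade-off described above can be made concrete. For a simple free-mass interferometer model (a standard textbook result, not stated in the abstract), shot noise falls and radiation-pressure noise rises with circulating power, and minimizing their quadrature sum over power traces out the standard quantum limit:

```latex
% Strain-referred noise scalings with circulating power P at signal
% angular frequency \Omega:
%   shot noise:          h_{\mathrm{shot}}(\Omega) \propto 1/\sqrt{P}
%   radiation pressure:  h_{\mathrm{rp}}(\Omega)   \propto \sqrt{P}/\Omega^{2}
% Minimizing the quadrature sum over P gives the free-mass SQL
% (m: mirror mass, L: arm length):
h_{\mathrm{SQL}}(\Omega) = \sqrt{\frac{8\hbar}{m\,\Omega^{2}L^{2}}}
```

Squeezed vacuum injected at the dark port reshapes this trade-off: frequency-independent squeezing lowers one noise at the expense of the other, while the frequency-dependent variant mentioned above can beat the envelope across the band.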

Keywords: gravitational waves, interferometers, squeezed vacuum, standard quantum limit

Procedia PDF Downloads 147
1601 Collocation Assessment between GEO and GSO Satellites

Authors: A. E. Emam, M. Abd Elghany

Abstract:

The change in orbit evolution between collocated satellites (X and Y) inside a ±0.09° E/W and ±0.07° N/S cluster window after one of the satellites (satellite X) is placed in an inclined orbit, and the effect of this change on collocation safety inside the cluster window, have been studied and evaluated. Several collocation scenarios were studied in order to adjust the locations of both satellites inside their cluster, so as to maximize the separation between them and safeguard the mission.

Keywords: satellite, GEO, collocation, risk assessment

Procedia PDF Downloads 392
1600 Carbon Fiber Manufacturing Conditions to Improve Interfacial Adhesion

Authors: Filip Stojcevski, Tim Hilditch, Luke Henderson

Abstract:

Although carbon fiber composites are becoming ever more prominent in the engineering industry, interfacial failure remains one of the most common limitations to material performance. Carbon fiber surface treatments have played a major role in advancing composite properties; however, research into the influence of manufacturing variables on a fiber manufacturing line is lacking. This project investigates the impact of altering carbon fiber manufacturing conditions on a production line (specifically electrochemical oxidization and sizing variables) to assess fiber-matrix adhesion. Pristine virgin fibers were manufactured, and interfacial adhesion was systematically assessed from the microscale (single fiber) to the mesoscale (12k tow) and ultimately the macroscale (laminate). Correlations between interfacial shear strength (IFSS) at each level are explored as a function of the known interfacial bonding mechanisms, namely mechanical interlocking, chemical adhesion, and fiber wetting. The impact of these bonding mechanisms is assessed through extensive mechanical, topological, and chemical characterization, and correlated to performance as a function of IFSS. Ultimately, this study provides a bottom-up approach to improving composite laminates. By understanding the scaling effects from a single fiber to a composite laminate and linking this knowledge to specific bonding mechanisms, material scientists can make an informed decision on the manufacturing conditions most beneficial for interfacial adhesion.

Keywords: carbon fibers, interfacial adhesion, surface treatment, sizing

Procedia PDF Downloads 260
1599 Correction of Frequent English Writing Errors by Using Coded Indirect Corrective Feedback and Error Treatment: The Case of Reading and Writing English for Academic Purposes II

Authors: Chaiwat Tantarangsee

Abstract:

The purposes of this study are: 1) to study the frequent English writing errors of students registered in the course Reading and Writing English for Academic Purposes II, and 2) to find out the results of writing error correction using coded indirect corrective feedback and writing error treatments. The sample comprises 28 second-year English major students of the Faculty of Education, Suan Sunandha Rajabhat University. The tool for the experimental study is the lesson plan of the course Reading and Writing English for Academic Purposes II, and the tool for data collection is 4 writing tests of short texts. The research findings disclose that the frequent English writing errors found in this course comprise 7 types of grammatical errors, namely sentence fragments, subject-verb agreement, wrong form of verb tense, singular or plural noun endings, run-on sentences, wrong form of verb pattern, and lack of parallel structure. Moreover, it is found that writing error correction using coded indirect corrective feedback and error treatment yields an overall reduction of the frequent English writing errors and an increase in students' achievement in the writing of short texts, significant at the .05 level.

Keywords: coded indirect corrective feedback, error correction, error treatment, English writing

Procedia PDF Downloads 300
1598 Genome-Wide Assessment of Putative Superoxide Dismutases in Unicellular and Filamentous Cyanobacteria

Authors: Shivam Yadav, Neelam Atri

Abstract:

Cyanobacteria are photoautotrophic prokaryotes able to grow in diverse ecological habitats; they originated 2.5-3.5 billion years ago and introduced oxygenic photosynthesis. Since then, superoxide dismutases (SODs) have acquired great significance due to their ability to catalyze the detoxification of byproducts of oxygenic photosynthesis, i.e., superoxide radicals. Sequence information from several cyanobacterial genomes offers a unique opportunity to conduct a comprehensive comparative analysis of the superoxide dismutase family. In the present study, we extracted information regarding SODs from species of sequenced cyanobacteria and investigated their diversity, conservation, domain structure, and evolution. 144 putative SOD homologues were identified. SODs are present in all cyanobacterial species, reflecting their significant role in survival; however, their distribution varies, with fewer in unicellular marine strains and an abundance in filamentous nitrogen-fixing cyanobacteria. Motifs and invariant amino acids typical of eukaryotic SODs were well conserved in these proteins. The SODs were classified into three major families according to their domain structures; interestingly, they lack the additional domains found in proteins of other families. Phylogenetic relationships correspond well with phylogenies based on 16S rRNA, and clustering occurs on the basis of structural characteristics such as domain organization. Similar conserved motifs and amino acids indicate that cyanobacterial SODs employ a catalytic mechanism similar to that of eukaryotic SODs. Gene gain-and-loss was insignificant during SOD evolution, as evidenced by the absence of additional domains. This study not only examines the overall background of sequence-structure-function interactions for the SOD gene family but also reveals variation in SOD distribution based on ecophysiological and morphological characters.

Keywords: comparative genomics, cyanobacteria, phylogeny, superoxide dismutases

Procedia PDF Downloads 128
1597 Applied Mathematical Approach on “Baut” Special High Performance Metal Aggregate by Formulation and Equations

Authors: J. R. Bhalla, Gautam, Gurcharan Singh, Sanjeev Naval

Abstract:

Mathematics underlies everything on Earth and in the universe. Predynastic Egyptians of the 5th millennium BC pictorially represented geometric designs; today, applied mathematics allows us to derive and apply equations for complex geometries. This work focuses on creating a formula applicable in civil engineering, in new concrete technology. The target of this paper is a formula for the "BAUT" special metal aggregate, approached through two applied mathematical study cases: 1. a basic physical formulation and 2. an advance equation describing the mechanical performance of special metal aggregates for concrete technology. Case 1, the basic physical formulation, derives the surface area and volume manually; case 2, the advance equation, addresses the mechanical performance of the metal aggregates, which have outstanding qualities for resisting shear, tension and compression forces. In this paper, a 20 mm coarse metal aggregate is used for making high performance concrete (H.P.C.).
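The paper's actual formulation is not reproduced in the abstract; as an illustration of the basic physical formulation (case 1), a minimal Python sketch, assuming an idealized spherical aggregate geometry, computes the surface area and volume of the 20 mm coarse aggregate:

```python
import math

def sphere_surface_area(d_mm: float) -> float:
    """Surface area (mm^2) of a spherical aggregate of diameter d_mm."""
    r = d_mm / 2.0
    return 4.0 * math.pi * r ** 2

def sphere_volume(d_mm: float) -> float:
    """Volume (mm^3) of a spherical aggregate of diameter d_mm."""
    r = d_mm / 2.0
    return (4.0 / 3.0) * math.pi * r ** 3

# 20 mm coarse metal aggregate, as used for high performance concrete (H.P.C.)
d = 20.0
area = sphere_surface_area(d)   # ~1256.64 mm^2
volume = sphere_volume(d)       # ~4188.79 mm^3
```

The spherical assumption is purely illustrative; the "BAUT" aggregate's actual geometry and equations are given in the full paper.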

Keywords: applied mathematical study case, special metal aggregates, concrete technology, basic physical formulation, advance equation

Procedia PDF Downloads 366
1596 Dislocation Density-Based Modeling of the Grain Refinement in Surface Mechanical Attrition Treatment

Authors: Reza Miresmaeili, Asghar Heydari Astaraee, Fereshteh Dolati

Abstract:

In the present study, an analytical dislocation density-based model was developed to simulate grain refinement in surface mechanical attrition treatment (SMAT). The correlation between SMAT time and the development of plastic strain, on the one hand, and dislocation density evolution, on the other, was established to simulate the grain refinement in SMAT. A dislocation density-based constitutive material law was implemented using a VUHARD subroutine. A random sequence of shots is taken into consideration in the multiple-impact model, generated in Python using a random function. The simulation technique was to model each impact in a separate run and then transfer the results of each run as initial conditions for the next run (impact). The developed finite element (FE) model of multiple impacts describes the coverage evolution in SMAT. Simulations were run to coverage levels as high as 4500%. It is shown that the coverage implemented in the FE model is equal to the experimental coverage, and the numerical SMAT coverage parameter conforms adequately to the well-known Avrami model. Comparison between numerical results and experimental measurements of residual stresses and depths of the deformation layers confirms the performance of the established FE model for surface engineering evaluations in SMAT. X-ray diffraction (XRD) studies of grain refinement, including resultant grain size and dislocation density, were conducted to validate the established model; the full width at half maximum in XRD profiles can be used to measure the grain size. Numerical results and experimental measurements of grain refinement are in good agreement and show the capability of the established FE model to predict the gradient microstructure in SMAT.

Keywords: dislocation density, grain refinement, severe plastic deformation, simulation, surface mechanical attrition treatment

Procedia PDF Downloads 131
1595 Bound State Problems and Functional Differential Geometry

Authors: S. Srednyak

Abstract:

We study a class of functional partial differential equations (FPDEs) suggested by quantum field theory. We derive general properties of solutions to such equations. In particular, we demonstrate that they lead to systems of coupled integral equations with singular kernels, and we show that solutions to such hierarchies can be sought among functions with regular singularities at a countable set of subvarieties of the physical space. We also develop a formal analogy of the basic constructions of differential geometry on functional manifolds, as this is necessary for an in-depth study of FPDEs. We further consider the case of linear overdetermined systems of functional differential equations and show that it can be completely solved in terms of formal solutions of a functional equation that is the functional analogue of a system of determined algebraic equations. This development leads us to formally define the functional analogue of algebraic geometry, which we call functional algebraic geometry. We study basic properties of functional algebraic varieties; in particular, we investigate the case of a formally discrete set of solutions, and we define and study functional analogues of discriminants. In the case of fully determined systems whose defining functionals have regular singularities, we demonstrate that formal solutions can be sought in the class of functions with regular singularities. This case provides a practical way to apply our results to physics problems.

Keywords: functional equations, quantum field theory, holomorphic functions, Yang Mills mass gap problem, quantum chaos

Procedia PDF Downloads 67
1594 Influence of Kneading Conditions on the Textural Properties of Alumina Catalysts Supports for Hydrotreating

Authors: Lucie Speyer, Vincent Lecocq, Séverine Humbert, Antoine Hugon

Abstract:

Mesoporous alumina is commonly used as a catalyst support for the hydrotreating of heavy petroleum cuts. The process of fabrication usually involves the synthesis of the boehmite AlOOH precursor, a kneading-extrusion step, and a calcination to obtain the final alumina extrudates. Alumina is a complex porous medium, generally composed of agglomerates of aggregated nanocrystallites. Its porous texture directly influences active phase deposition, mass transfer, and the catalytic properties. Each step of the fabrication of the supports therefore plays a role in the building of their porous network and has to be well understood to optimize the process. The synthesis of boehmite by precipitation of aluminum salts has been extensively studied in the literature, and parameters such as temperature or pH are known to influence the size and shape of the crystallites and the specific surface area of the support. The calcination step, through the topotactic transition from boehmite to alumina, determines the final properties of the support and can tune the surface area, pore volume and pore diameters relative to those of boehmite. However, the kneading-extrusion step has been the subject of very few studies. It generally consists of two steps, an acid kneading followed by a basic kneading, in which the boehmite powder is introduced into a mixer and successively mixed with an acid and a base solution to form an extrudable paste. During the acid kneading, the induced positive charges on the hydroxyl surface groups of boehmite create an electrostatic repulsion which tends to separate the aggregates and even, depending on the conditions, the crystallites. The basic kneading, by reducing the surface charges, leads to a flocculation phenomenon and can control the reforming of the overall structure.
The separation and reassembling of the particles constituting the boehmite paste have an obvious influence on the textural properties of the material. In this work, we focus on the influence of the kneading step on alumina catalyst supports. Starting from an industrial boehmite, extrudates were prepared under various kneading conditions. The samples were studied by nitrogen physisorption, to analyze the evolution of the textural properties, and by synchrotron small-angle X-ray scattering (SAXS), a more original method which brings information about the agglomeration and aggregation of the samples. The coupling of physisorption and SAXS enables a precise description of the samples, as well as accurate monitoring of their evolution as a function of the kneading conditions, which are found to have a strong influence on the pore volume and pore size distribution of the supports. A mechanism for the evolution of the texture during the kneading step is proposed, which could be attractive for optimizing the texture of the supports and thus their catalytic performance.

Keywords: alumina catalyst support, kneading, nitrogen physisorption, small-angle X-ray scattering

Procedia PDF Downloads 246
1593 Assessment of Groundwater Chemistry and Quality Characteristics in an Alluvial Aquifer and a Single Plane Fractured-Rock Aquifer in Bloemfontein, South Africa

Authors: Modreck Gomo

Abstract:

The evolution of groundwater chemistry and quality is largely controlled by hydrogeochemical processes, and understanding these processes is therefore important for groundwater quality assessments and the protection of water resources. A study was conducted in the town of Bloemfontein, South Africa, to assess and compare the groundwater chemistry and quality characteristics of an alluvial aquifer and a single-plane fractured-rock aquifer. Nine groundwater samples were collected from monitoring boreholes drilled into the two aquifer systems during a once-off sampling exercise. Samples were collected through a low-flow purging technique and analysed for major ions and trace elements. In order to describe the hydrochemical facies and identify dominant hydrogeochemical processes, the groundwater chemistry data are interpreted using Stiff diagrams and principal component analysis (PCA) as complementary tools. The fitness of the groundwater quality for domestic and irrigation uses is also assessed. Results show that the alluvial aquifer is characterised by a Na-HCO3 hydrochemical facies, while the fractured-rock aquifer has a Ca-HCO3 facies. The groundwater in both aquifers originally evolved from the dissolution of calcite rocks that are common in land surface environments. However, the groundwater in the alluvial aquifer undergoes a further evolution driven by cation exchange, in which Na+ in the sediments exchanges with Ca2+ of the Ca-HCO3 hydrochemical type to produce the Na-HCO3 type. Despite this difference in hydrogeochemical processes between the alluvial aquifer and the single-plane fractured-rock aquifer, the groundwater quality was not affected. The groundwater in the two aquifers is very hard, owing to elevated magnesium and calcium ions from the dissolution of carbonate minerals, which typically occurs in surface environments.
Based on total dissolved solids levels (600-900 mg/L), the groundwater of the two aquifer systems is classified as being of fair quality. The potential negative impacts of the groundwater quality on domestic uses are highlighted.
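The quality classification and hardness assessment described above can be sketched as simple functions. The TDS bands below are illustrative thresholds chosen so that the study's reported 600-900 mg/L range falls in the "fair" class; the hardness formula uses the standard CaCO3-equivalence factors:

```python
def classify_tds(tds_mg_per_l: float) -> str:
    """Classify groundwater by total dissolved solids (mg/L).

    Bands are hypothetical, broadly following common drinking-water
    palatability classes; the study reports 600-900 mg/L as 'fair'.
    """
    if tds_mg_per_l < 600:
        return "good"
    if tds_mg_per_l <= 900:
        return "fair"
    if tds_mg_per_l <= 1200:
        return "poor"
    return "unacceptable"

def total_hardness(ca_mg_l: float, mg_mg_l: float) -> float:
    """Total hardness as mg/L CaCO3 from calcium and magnesium
    concentrations, using the standard equivalence factors."""
    return 2.497 * ca_mg_l + 4.118 * mg_mg_l
```

For example, `classify_tds(750)` returns `"fair"`, matching the classification reported for both aquifer systems.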

Keywords: alluvial aquifer, fractured-rock aquifer, groundwater quality, hydrogeochemical processes

Procedia PDF Downloads 193
1592 Fuzzy Time Series Forecasting Based on Fuzzy Logical Relationships, PSO Technique, and Automatic Clustering Algorithm

Authors: A. K. M. Kamrul Islam, Abdelhamid Bouchachia, Suang Cang, Hongnian Yu

Abstract:

Forecasting models have a great impact in terms of prediction and will continue to do so in the future. Although many forecasting models have been studied in recent years, most researchers focus on different fuzzy time series-based methods to solve forecasting problems. A forecasting model's accuracy depends principally on two factors: the length of the intervals in the universe of discourse and the content of the forecast rules. Moreover, a hybrid forecasting method can be a more effective and efficient way to improve forecasts than an individual forecasting model. Different hybrid forecasting models have combined fuzzy time series with evolutionary algorithms, but their performance is not quite satisfactory. In this paper, we propose a hybrid forecasting model which handles first-order as well as high-order fuzzy time series and uses particle swarm optimization to improve forecasting accuracy. The proposed method uses the historical enrollments of the University of Alabama as the dataset in the forecasting process. Firstly, an automatic clustering algorithm is applied to calculate appropriate intervals for the historical enrollments; then particle swarm optimization and fuzzy time series are combined, which yields better forecasting accuracy than other existing forecasting models.
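The first-order fuzzy time series baseline that such hybrids build on can be sketched as follows. This is a minimal Chen-style implementation with equal-length intervals; the paper's model instead tunes the interval boundaries with an automatic clustering algorithm and PSO, which is not reproduced here:

```python
from collections import defaultdict

def chen_fts_forecast(series, n_intervals=7):
    """One-step-ahead first-order fuzzy time series forecast.

    1. Partition the universe of discourse into equal-length intervals.
    2. Fuzzify each observation to the interval containing it.
    3. Build fuzzy logical relationship groups (state -> successors).
    4. Defuzzify: average the midpoints of the last state's successors.
    """
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_intervals or 1.0   # guard against a constant series
    mids = [lo + (i + 0.5) * width for i in range(n_intervals)]

    def fuzzify(x):  # index of the interval containing x
        return min(int((x - lo) / width), n_intervals - 1)

    states = [fuzzify(x) for x in series]
    flrg = defaultdict(set)                  # fuzzy logical relationship groups
    for cur, nxt in zip(states, states[1:]):
        flrg[cur].add(nxt)

    successors = flrg.get(states[-1], {states[-1]})
    return sum(mids[s] for s in successors) / len(successors)
```

The forecast always lies within the universe of discourse; replacing the equal-length partition with cluster-derived intervals and PSO-optimized boundaries is exactly where the hybrid model improves on this baseline.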

Keywords: fuzzy time series (fts), particle swarm optimization, clustering algorithm, hybrid forecasting model

Procedia PDF Downloads 244
1591 An Ontological Approach to Existentialist Theatre and Theatre of the Absurd in the Works of Jean-Paul Sartre and Samuel Beckett

Authors: Gülten Silindir Keretli

Abstract:

The aim of this study is to analyse the works of the two playwrights within the framework of existential philosophy, observing ontological existence in the plays No Exit and Endgame. The literary works are discussed separately in each section of the study. The despair of the post-war generation of Europe problematized the 'human condition' in every field of literature, itself the very product of social upheaval. With this concern in mind, Sartre's creative works portrayed man as a lonely being, burdened with the terrifying freedom to choose and create his own meaning in an apparently meaningless world. Traces of existential thought are to be found throughout the history of philosophy and literature. The theatre of the absurd, on the other hand, is a form of drama showing the absurdity of the human condition, and it is heavily influenced by existential philosophy. Beckett is the most influential playwright of the theatre of the absurd, and the themes and thoughts in his plays share many tenets of the existential philosophy, which posits the meaninglessness of existence and regards man as thrown into the universe and into desolate isolation. To overcome loneliness and isolation, the human ego needs recognition from other people; Sartre calls this need for recognition the need for 'the Look' (le regard) from the Other. In this paper, existentialist philosophy and existentialist angst are elaborated, and then the works of existentialist theatre and the theatre of the absurd are discussed within the framework of existential philosophy.

Keywords: consciousness, existentialism, the notion of the absurd, the other

Procedia PDF Downloads 153
1590 Black-Hole Dimension: A Distinct Methodology of Understanding Time, Space and Data in Architecture

Authors: Alp Arda

Abstract:

Inspired by Nolan's ‘Interstellar’, this paper delves into speculative architecture, asking, ‘What if an architect could traverse time to study a city?’ It unveils the ‘Black-Hole Dimension,’ a groundbreaking concept that redefines urban identities beyond traditional boundaries. Moving past linear time narratives, this approach draws from the gravitational dynamics of black holes to enrich our understanding of urban and architectural progress. By envisioning cities and structures as influenced by black hole-like forces, it enables an in-depth examination of their evolution through time and space. The Black-Hole Dimension promotes a temporal exploration of architecture, treating spaces as narratives of their current state interwoven with historical layers. It advocates for viewing architectural development as a continuous, interconnected journey molded by cultural, economic, and technological shifts. This approach not only deepens our understanding of urban evolution but also empowers architects and urban planners to create designs that are both adaptable and resilient. Echoing themes from popular culture and science fiction, this methodology integrates the captivating dynamics of time and space into architectural analysis, challenging established design conventions. The Black-Hole Dimension champions a philosophy that welcomes unpredictability and complexity, thereby fostering innovation in design. In essence, the Black-Hole Dimension revolutionizes architectural thought by emphasizing space-time as a fundamental dimension. It reimagines our built environments as vibrant, evolving entities shaped by the relentless forces of time, space, and data. This groundbreaking approach heralds a future in architecture where the complexity of reality is acknowledged and embraced, leading to the creation of spaces that are both responsive to their temporal context and resilient against the unfolding tapestry of time.

Keywords: black-hole, timeline, urbanism, space and time, speculative architecture

Procedia PDF Downloads 63
1589 Governance and Public Policy: The Perception of Efficiency and Equity in Brazil and South Africa

Authors: Paulino V. Tavares, Ana L. Romao

Abstract:

Public governance represents an articulated, dynamic and interactive arrangement, present in the exercise of authority, aimed at strengthening decision-making procedures in public administration with transparency, accountability and responsiveness, capable of generating control and social empowerment, and of pursuing and achieving objectives efficiently and with the effectiveness desired by the collective, respecting laws and providing social, institutional and economic equity in society. Accordingly, using a multidimensional approach, three questionnaires were applied to a universe of twenty counselors of the Courts of Auditors (Brazil), twenty public administration professionals (Brazil), twenty government/provincial counselors (South Africa), and twenty South African public administration professionals; the present work aims to capture the perception of the efficiency and equity of public policies in Brazil and South Africa. So far, 30 responses have been obtained. The results indicate that, in Brazil, 65% affirm that public policies are inefficient, and 70% state that they do not believe in the equity of these same policies. In South Africa, 45% believe in government efficiency, while with regard to the equity of public policies, 65% do not believe in it. In Brazil, the research reveals at least three reasons for this result, namely a lack of planning, a lack of clear objectives for public policies, and a lack of information on the part of society; in South Africa, the research has so far not identified a specific reason for this result.

Keywords: efficiency, equity, governance, public policy

Procedia PDF Downloads 116
1588 Denoising Convolutional Neural Network Assisted Electrocardiogram Signal Watermarking for Secure Transmission in E-Healthcare Applications

Authors: Jyoti Rani, Ashima Anand, Shivendra Shivani

Abstract:

In recent years, physiological signals obtained in telemedicine have been stored independently of patient information. In addition, people have increasingly turned to mobile devices for information on health-related topics. Major authentication and security issues may arise from this storage, degrading the reliability of diagnostics. This study introduces an approach to reversible watermarking which ensures security by utilizing the electrocardiogram (ECG) signal as a carrier for embedding patient information. In the proposed work, Pan-Tompkins++ is employed to convert the 1D ECG signal into a 2D signal. The frequency subbands of the signal are extracted using the redundant discrete wavelet transform (RDWT), and one of the subbands is then subjected to multiresolution singular value decomposition (MSVD) for masking. Finally, the encrypted watermark is embedded within the signal. The experimental results show that the watermarked signal is indistinguishable from the original, ensuring the preservation of all diagnostic information. In addition, a denoising convolutional neural network (DnCNN) is used to denoise the retrieved watermark for improved accuracy. The proposed ECG signal-based watermarking method is supported by experimental results and evaluations of its effectiveness, and the robustness tests demonstrate that the watermark withstands the most prevalent watermarking attacks.
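The full RDWT + MSVD pipeline is beyond the scope of an abstract, but the core idea of embedding data in the singular values of a 2D signal can be sketched as follows. This is a simplified SVD-domain illustration under its own assumptions (no wavelet transform, non-blind extraction using the original singular values as the key), not the paper's algorithm:

```python
import numpy as np

def svd_embed(cover: np.ndarray, watermark: np.ndarray, alpha: float = 0.05):
    """Embed watermark values into the singular values of a 2D cover signal.

    Requires watermark.size >= number of singular values. Exact recovery
    assumes the modified singular values stay sorted and nonnegative
    (small alpha, well-separated singular values).
    Returns the watermarked signal and the extraction key.
    """
    u, s, vt = np.linalg.svd(cover, full_matrices=False)
    s_marked = s + alpha * watermark.flatten()[: s.size]
    return u @ np.diag(s_marked) @ vt, (s, alpha)

def svd_extract(marked: np.ndarray, key):
    """Recover the embedded values by comparing singular values
    of the marked signal against the stored originals."""
    s_orig, alpha = key
    _, s_marked, _ = np.linalg.svd(marked, full_matrices=False)
    return (s_marked - s_orig) / alpha
```

In the paper's scheme, the cover would be an RDWT subband of the Pan-Tompkins++ 2D signal and the watermark would be chaotically encrypted patient data before embedding.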

Keywords: ECG, VMD, watermarking, PanTompkins++, RDWT, DnCNN, MSVD, chaotic encryption, attacks

Procedia PDF Downloads 93
1587 The Conditionality of Financial Risk: A Comparative Analysis of High-Tech and Utility Companies Listed on the Shenzhen Stock Exchange (SSE)

Authors: Joseph Paul Chunga

Abstract:

The investment universe is awash with financial choices, which principally culminate in a duality between aggressive and conservative approaches. However, it is pertinent to emphasize that investment vehicles with an aggressive approach tend to take on more risk than the latter group in an effort to generate higher future returns for their investors. This study examines the conditionality effect that such partiality in financing has on the high-tech and public utility companies listed on the Shenzhen Stock Exchange (SSE). Specifically, it examines the significance of the relationship between the capitalization ratios of total debt ratio (TDR) and degree of financial leverage (DFL), and the profitability ratios of earnings per share (EPS) and return on equity (ROE), on the financial risk of the two industries. We employ a modified version of the panel regression model used by Rahman (2017) to estimate the relationship. The study finds a significant positive relationship between the capitalization ratios and financial risk, stronger for public utility companies than for high-tech companies, and a substantial negative relationship between the profitability ratios and financial risk, stronger for the former than for the latter. This offers an important insight for prospective investors with regard to the volatility of earnings of such companies.
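The estimation above can be illustrated with a minimal pooled OLS sketch; this is a simplified stand-in for the modified panel regression of Rahman (2017) cited in the abstract, omitting firm and period effects:

```python
import numpy as np

def pooled_ols(y: np.ndarray, X: np.ndarray) -> np.ndarray:
    """Pooled OLS estimate of beta for y = X @ beta + e.

    An intercept column is prepended, so beta[0] is the intercept and
    the remaining entries follow the column order of X (e.g. TDR, DFL,
    EPS, ROE as regressors for a financial-risk measure).
    """
    Xc = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    return beta
```

Positive estimated coefficients on the capitalization ratios and negative ones on the profitability ratios would correspond to the sign pattern the study reports; the actual panel specification adds the structure this sketch leaves out.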

Keywords: financial leverage, debt financing, conservative firms, aggressive firms

Procedia PDF Downloads 172
1586 Time, Uncertainty, and Technological Innovation

Authors: Xavier Everaert

Abstract:

Ever since the publication of "The Problem of Social Cost", Coasean insights on externalities, transaction costs, and the reciprocal nature of harms have been widely debated. What has been largely neglected, however, is the role of technological innovation in the mitigation of negative externalities or transaction costs. Incorporating future uncertainty about negligence standards or expected restitution costs, and the profit opportunities these uncertainties reveal to entrepreneurs, allows us to frame problems regarding social costs within the reality of rapid technological evolution.

Keywords: environmental law and economics, entrepreneurship, commons, pollution, wildlife

Procedia PDF Downloads 414
1585 Quality is the Matter of All

Authors: Mohamed Hamza, Alex Ohoussou

Abstract:

At JAWDA, our primary focus is on ensuring the satisfaction of our clients worldwide. We are committed to delivering new features on our SaaS platform as quickly as possible while maintaining high-quality standards. In this paper, we highlight two key aspects of testing that represent an evolution of current methods and a potential trend for the future, and which have enabled us to uphold our commitment effectively: "One Sandbox per Pull Request" (dynamic test environments instead of static ones) and "QA for All".

Keywords: QA for all, dynamic sandboxes, QAOPS, CICD, continuous testing, all testers, QA matters for all, 1 sandbox per PR, utilization rate, coverage rate

Procedia PDF Downloads 19