Search results for: compressed stabilized earth blocks
1765 A Model of the Universe without Expansion of Space
Authors: Jia-Chao Wang
Abstract:
A model of the universe without invoking space expansion is proposed to explain the observed redshift-distance relation and the cosmic microwave background radiation (CMB). The main hypothesized feature of the model is that photons traveling in space interact with the CMB photon gas. This interaction causes the photons to gradually lose energy through dissipation and, therefore, experience redshift. The interaction also causes some of the photons to be scattered off their track toward an observer and, therefore, results in beam intensity attenuation. As observed, the CMB exists everywhere in space and its photon density is relatively high (about 410 per cm³). The small average energy of the CMB photons (about 6.3×10⁻⁴ eV) can reduce the energies of traveling photons gradually and will not alter their momenta drastically as in, for example, Compton scattering, which would totally blur the images of distant objects. An object moving through a thermalized photon gas, such as the CMB, experiences a drag. The cause is that the object sees a blueshifted photon gas along the direction of motion and a redshifted one in the opposite direction. An example of this effect is the observed CMB dipole: the Earth travels at about 368 km/s relative to the CMB (the Local Group at roughly 600 km/s). In the all-sky map from the COBE satellite, radiation in the Earth's direction of motion appears about 3.35 mK hotter than the average temperature, 2.725 K, while radiation on the opposite side of the sky is about 3.35 mK colder. The pressure of a thermalized photon gas is given by Pγ = Eγ/3 = αT⁴/3, where Eγ is the energy density of the photon gas and α = 4σ/c is the radiation constant (σ being the Stefan-Boltzmann constant). The observed CMB dipole therefore implies a pressure difference between the two sides of the Earth and results in a CMB drag on the Earth. By plugging in suitable estimates of the quantities involved, such as the cross section of the Earth and the temperatures on the two sides, this drag can be estimated to be tiny. But for a photon traveling at the speed of light, 300,000 km/s, the drag can be significant. In the present model, for the dissipation part, it is assumed that a photon traveling from a distant object toward an observer has an effective interaction cross section pushing against the pressure of the CMB photon gas. For the attenuation part, the coefficient of the typical attenuation equation is used as a parameter. The values of these two parameters are determined by fitting the 748 µ vs. z data points compiled from 643 supernova and 105 γ-ray burst observations with z values up to 8.1. The fit is as good as that obtained from the lambda cold dark matter (ΛCDM) model using online cosmological calculators and Planck 2015 results. The model can be used to interpret Hubble's constant, Olbers' paradox, the origin and blackbody nature of the CMB radiation, the broadening of supernova light curves, and the size of the observable universe.
Keywords: CMB as the lowest energy state, model of the universe, origin of CMB in a static universe, photon-CMB photon gas interaction
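As a rough consistency check of the photon-gas pressure argument above, the sketch below evaluates Pγ = αT⁴/3 at the CMB temperature, the photon number density and mean photon energy, and the dipole-induced pressure difference acting on the Earth's cross section. It is a minimal numerical illustration of the quantities quoted in the abstract, not part of the authors' fitting procedure; the dipole amplitude and the Earth radius are standard values inserted here for illustration.

```python
import math

a_rad = 7.5657e-16       # radiation constant a = 4*sigma/c, J m^-3 K^-4
k_B = 1.380649e-23       # Boltzmann constant, J/K
hbar_c = 3.1615e-26      # hbar*c, J*m
T = 2.725                # CMB temperature, K
dT = 3.35e-3             # CMB dipole amplitude, K
R_earth = 6.371e6        # Earth radius, m

E_gamma = a_rad * T**4                                   # energy density of the CMB photon gas, J/m^3
P_gamma = E_gamma / 3.0                                  # photon gas pressure, Pa
n_gamma = 2.404 / math.pi**2 * (k_B * T / hbar_c) ** 3   # photon number density, 1/m^3 (about 4.1e8)
mean_E_eV = E_gamma / n_gamma / 1.602e-19                # mean photon energy, about 6.3e-4 eV

# Pressure difference between the blueshifted and redshifted hemispheres (dipole),
# multiplied by the Earth's cross section, gives a very small drag force.
dP = a_rad * ((T + dT) ** 4 - (T - dT) ** 4) / 3.0
drag_force = dP * math.pi * R_earth**2                   # N, on the order of 1e-2 N

print(P_gamma, n_gamma * 1e-6, mean_E_eV, drag_force)
```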
Procedia PDF Downloads 135
1764 Effects of Sn and Al on Phase Stability and Mechanical Properties of Metastable Beta Ti Alloys
Authors: Yonosuke Murayama
Abstract:
We have developed and studied a metastable beta Ti alloy, which shows super-elasticity and a low Young’s modulus depending on the phase stability of its beta phase. Super-elasticity and a low Young’s modulus are required in a wide range of applications in various industrial fields. For example, a metallic implant with low Young’s modulus and non-toxicity is desirable because a large difference in Young’s modulus between the human bone and the implant material may cause a stress-shielding phenomenon. We have investigated the role of Sn and Al in metastable beta Ti-Cr-Sn, Ti-Cr-Al, Ti-V-Sn, and Ti-V-Al alloys. The metastable beta Ti-Cr-Sn, Ti-Cr-Al, Ti-V-Sn, and Ti-V-Al alloys form during quenching from the beta field at high temperature. While Cr and V act as beta stabilizers, Sn and Al are considered elements that suppress the athermal omega phase produced during quenching. The athermal omega phase degrades the super-elasticity and Young’s modulus properties. Although Al and Sn as single elements are considered an alpha stabilizer and neutral, respectively, Sn and Al also acted as beta stabilizers when added together with the beta-stabilizing element Cr or V in this experiment. The quenched microstructure of Ti-Cr-Sn, Ti-Cr-Al, Ti-V-Sn, and Ti-V-Al alloys shifts from a martensitic structure to a beta single-phase structure with increasing Cr or V. The Young’s modulus of Ti-Cr-Sn, Ti-Cr-Al, Ti-V-Sn, and Ti-V-Al alloys decreased and then increased with increasing Cr or V, each showing its own minimum value of Young's modulus. The composition of the alloy with the minimum Young’s modulus is near the border composition where the quenched microstructure shifts from martensite to beta. The border compositions of the Ti-Cr-Sn and Ti-V-Sn alloys required a smaller amount of the beta stabilizer, Cr or V, than the Ti-Cr-Al and Ti-V-Al alloys. This indicates that the effect of Sn as a beta stabilizer is stronger than that of Al. Sn and Al influenced the competition between stress-induced martensitic transformation and slip deformation. Thus, the super-elastic properties of metastable beta Ti-Cr-Sn, Ti-Cr-Al, Ti-V-Sn, and Ti-V-Al alloys varied depending on the alloying element, Sn or Al.
Keywords: metastable beta Ti alloy, super-elasticity, low Young’s modulus, stress-induced martensitic transformation, beta stabilized element
Procedia PDF Downloads 146
1763 Geochemistry and Petrogenesis of Anorogenic Acid Plutonic Rocks of Khanak and Devsar of Southwestern Haryana
Authors: Naresh Kumar, Radhika Sharma, A. K. Singh
Abstract:
Acid plutonic rocks from the Khanak and Devsar areas of southwestern Haryana were investigated to understand their geochemical and petrogenetic characteristics and tectonic environments. Three dominant rock types (grey, greyish green and pink granites) are the principal geochemical feature of the Khanak and Devsar areas, reflecting the dependence of their composition on the varied geological environment during the anorogenic magmatism. These rocks are enriched in SiO₂, Na₂O+K₂O, Fe/Mg, Rb, Zr, Y, Th, U and REE (Rare Earth Elements) and depleted in MgO, CaO, Sr, P, Ti, Ni, Cr, V and Eu, and they exhibit a clear affinity to within-plate granites that were emplaced in an extensional tectonic environment. Chondrite-normalized REE patterns show enriched LREE (Light Rare Earth Elements), moderate to strong negative Eu anomalies and flat heavy REE; the grey and greyish green granites differ from the pink granite, which is enriched in Rb, Ga, Nb, Th, U, Y and HREE (Heavy Rare Earth Elements). The composition of the parental magma of both areas corresponds to a mafic source contaminated with crustal materials. Petrogenetic modelling suggests that the acid plutonic rocks might have been generated from a basaltic source by partial melting (15-25%), leaving a residue with 35% plagioclase, 25% alkali feldspar, 25% quartz, 7% orthopyroxene, 5% biotite and 3% hornblende. The granites from the two areas might have formed from different sources with different degrees of melting for the grey, greyish green and pink granites.
Keywords: A-type granite, anorogenic, Malani igneous suite, Khanak and Devsar
Procedia PDF Downloads 178
1762 Error Detection and Correction for Onboard Satellite Computers Using Hamming Code
Authors: Rafsan Al Mamun, Md. Motaharul Islam, Rabana Tajrin, Nabiha Noor, Shafinaz Qader
Abstract:
In an attempt to enrich the lives of billions of people by providing proper information, security and a way of communicating with others, the need for efficient and improved satellites is constantly growing. Thus, there is an increasing demand for better error detection and correction (EDAC) schemes, which are capable of protecting the data onboard the satellites. The paper is aimed towards detecting and correcting such errors using a special algorithm called the Hamming Code, which uses the concept of parity and parity bits to prevent single-bit errors onboard a satellite in Low Earth Orbit. This paper focuses on the study of Low Earth Orbit satellites and the process of generating the Hamming Code matrix to be used for EDAC using computer programs. The most effective version of Hamming Code generated was the Hamming (16, 11, 4) version using MATLAB, and the paper compares this particular scheme with other EDAC mechanisms, including other versions of Hamming Codes and Cyclic Redundancy Check (CRC), and the limitations of this scheme. This particular version of the Hamming Code guarantees single-bit error corrections as well as double-bit error detections. Furthermore, this version of Hamming Code has proved to be fast with a checking time of 5.669 nanoseconds, that has a relatively higher code rate and lower bit overhead compared to the other versions and can detect a greater percentage of errors per length of code than other EDAC schemes with similar capabilities. In conclusion, with the proper implementation of the system, it is quite possible to ensure a relatively uncorrupted satellite storage system.Keywords: bit-flips, Hamming code, low earth orbit, parity bits, satellite, single error upset
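As a hedged illustration of the scheme described above, the sketch below encodes 11 data bits into a 16-bit extended Hamming (SECDED) codeword and decodes it, correcting any single-bit flip and flagging double-bit flips. It follows the textbook Hamming(15,11)-plus-overall-parity construction; the paper's MATLAB generator matrix and timing figures are not reproduced here.

```python
def hamming_16_11_encode(data_bits):
    """Encode 11 data bits into a 16-bit SECDED codeword: Hamming(15,11) plus an overall parity bit."""
    assert len(data_bits) == 11
    code = [0] * 16                          # index 0 holds the overall parity; 1..15 are Hamming positions
    data = iter(data_bits)
    for pos in range(1, 16):
        if pos not in (1, 2, 4, 8):          # non-power-of-two positions carry data bits
            code[pos] = next(data)
    for p in (1, 2, 4, 8):                   # parity bit p covers every position having bit p set
        code[p] = sum(code[pos] for pos in range(1, 16) if pos & p) % 2
    code[0] = sum(code[1:]) % 2              # overall parity enables double-error detection
    return code

def hamming_16_11_decode(code):
    """Return (possibly corrected codeword, status) with status 'ok', 'corrected' or 'double-error'."""
    syndrome = 0
    for p in (1, 2, 4, 8):
        if sum(code[pos] for pos in range(1, 16) if pos & p) % 2:
            syndrome |= p
    overall = sum(code) % 2
    if syndrome == 0 and overall == 0:
        return code, "ok"
    if overall == 1:                         # odd overall parity: a single bit flipped (position = syndrome)
        fixed = code[:]
        fixed[syndrome] ^= 1                 # syndrome 0 means the overall parity bit itself was hit
        return fixed, "corrected"
    return code, "double-error"              # even overall parity with nonzero syndrome: two bit flips

# Single-event-upset example: flip one bit of the codeword and recover from it.
word = hamming_16_11_encode([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0])
word[9] ^= 1
print(hamming_16_11_decode(word)[1])         # -> 'corrected'
```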
Procedia PDF Downloads 130
1761 Evaluation of Hybrid Viscoelastic Damper for Passive Energy Dissipation
Authors: S. S. Ghodsi, M. H. Mehrabi, Zainah Ibrahim, Meldi Suhatril
Abstract:
This research examines the performance of a hybrid passive control device for enhancing the seismic response of steel frame structures. The device design comprises a damper which employs a viscoelastic material to control both shear and axial strain. In the design, energy is dissipated through the shear strain of a two-layer system of viscoelastic pads which are located between steel plates. In addition, viscoelastic blocks have been included on either side of the main shear damper which obtains compressive strains in the viscoelastic blocks. These dampers not only dissipate energy but also increase the stiffness of the steel frame structure, and the degree to which they increase the stiffness may be controlled by the size and shape. In this research, the cyclical behavior of the damper was examined both experimentally and numerically with finite element modeling. Cyclic loading results of the finite element modeling reveal fundamental characteristics of this hybrid viscoelastic damper. The results indicate that incorporating a damper of the design can significantly improve the seismic performance of steel frame structures.Keywords: cyclic loading, energy dissipation, hybrid damper, passive control system, viscoelastic damper
Procedia PDF Downloads 209
1760 Numerical Investigation of Multiphase Flow in Pipelines
Authors: Gozel Judakova, Markus Bause
Abstract:
We present and analyze reliable numerical techniques for simulating complex flow and transport phenomena related to natural gas transportation in pipelines. Such problems are of high interest in the field of petroleum and environmental engineering. Modeling and understanding natural gas flow and transformation processes during transportation is important for the sake of physical realism and for the design and operation of pipeline systems. In our approach, a two-fluid flow model based on a system of coupled hyperbolic conservation laws is considered for describing natural gas flow undergoing hydratization. The accurate numerical approximation of two-phase gas flow remains a subject of strong interest in the scientific community. Such hyperbolic problems are characterized by solutions with steep gradients or discontinuities, and their approximation by standard finite element techniques typically gives rise to spurious oscillations and numerical artefacts. Recently, stabilized and discontinuous Galerkin finite element techniques have attracted researchers’ interest. They are highly adapted to the hyperbolic nature of our two-phase flow model. In the presentation, a streamline upwind Petrov-Galerkin approach and a discontinuous Galerkin finite element method for the numerical approximation of our flow model of two coupled systems of Euler equations are presented. The efficiency and reliability of stabilized continuous and discontinuous finite element methods for the approximation are then carefully analyzed, and the potential of either class of numerical schemes is investigated. In particular, standard benchmark problems of two-phase flow, like the shock tube problem, are used for the comparative numerical study.
Keywords: discontinuous Galerkin method, Euler system, inviscid two-fluid model, streamline upwind Petrov-Galerkin method, two-phase flow
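To make the shock tube benchmark mentioned above concrete, here is a minimal finite-volume sketch for the single-phase 1D Euler equations with Sod's initial data. It deliberately uses the simple (and diffusive) Lax-Friedrichs flux rather than the stabilized SUPG or discontinuous Galerkin discretizations studied in the paper, so it only illustrates the kind of discontinuous solutions those methods must resolve; the grid size, CFL number and boundary treatment are illustrative choices.

```python
import numpy as np

def sod_shock_tube(n_cells=400, t_end=0.2, gamma=1.4, cfl=0.9):
    """First-order Lax-Friedrichs finite-volume scheme for the 1D Euler equations (Sod problem)."""
    x = np.linspace(0.0, 1.0, n_cells)
    dx = x[1] - x[0]
    rho = np.where(x < 0.5, 1.0, 0.125)            # Sod initial data in primitive variables (rho, u, p)
    u = np.zeros(n_cells)
    p = np.where(x < 0.5, 1.0, 0.1)
    U = np.vstack([rho, rho * u, p / (gamma - 1.0) + 0.5 * rho * u**2])   # conserved variables

    def flux(U):
        rho, mom, E = U
        u = mom / rho
        p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
        return np.vstack([mom, mom * u + p, u * (E + p)])

    t = 0.0
    while t < t_end:
        rho, mom, E = U
        u = mom / rho
        p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
        c = np.sqrt(gamma * p / rho)                                      # sound speed
        dt = min(cfl * dx / np.max(np.abs(u) + c), t_end - t)
        F = flux(U)
        # Lax-Friedrichs interface flux and conservative update (end cells kept at their initial state)
        F_half = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * (dx / dt) * (U[:, 1:] - U[:, :-1])
        U[:, 1:-1] -= (dt / dx) * (F_half[:, 1:] - F_half[:, :-1])
        t += dt
    return x, U
```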
Procedia PDF Downloads 330
1759 Geothermal Energy Evaluation of Lower Benue Trough Using Spectral Analysis of Aeromagnetic Data
Authors: Stella C. Okenu, Stephen O. Adikwu, Martins E. Okoro
Abstract:
The geothermal energy resource potential of the Lower Benue Trough (LBT) in Nigeria was evaluated in this study using spectral analysis of high-resolution aeromagnetic (HRAM) data. The reduced to the equator aeromagnetic data was divided into sixteen (16) overlapping blocks, and each of the blocks was analyzed to obtain the radial averaged power spectrum which enabled the computation of the top and centroid depths to magnetic sources. The values were then used to assess the Curie Point Depth (CPD), geothermal gradients, and heat flow variations in the study area. Results showed that CPD varies from 7.03 to 18.23 km, with an average of 12.26 km; geothermal gradient values vary between 31.82 and 82.50°C/km, with an average of 51.21°C/km, while heat flow variations range from 79.54 to 206.26 mW/m², with an average of 128.02 mW/m². Shallow CPD zones that run from the eastern through the western and southwestern parts of the study area correspond to zones of high geothermal gradient values and high subsurface heat flow distributions. These areas signify zones associated with anomalous subsurface thermal conditions and are therefore recommended for detailed geothermal energy exploration studies.Keywords: geothermal energy, curie-point depth, geothermal gradient, heat flow, aeromagnetic data, LBT
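The numbers quoted above follow the standard spectral workflow: the basal (Curie point) depth is obtained from the centroid and top depths of the magnetic sources, the geothermal gradient assumes a Curie temperature of about 580 °C, and heat flow follows from Fourier's law. A minimal sketch is shown below; a thermal conductivity of 2.5 W/(m·K) reproduces the abstract's heat-flow values, while the example top and centroid depths are hypothetical.

```python
def curie_point_analysis(z_top_km, z_centroid_km, curie_temp_c=580.0, k_thermal=2.5):
    """Curie point depth, geothermal gradient and heat flow from spectral depth estimates.
    z_top_km, z_centroid_km: depth to top and centroid of magnetic sources (km);
    k_thermal: thermal conductivity in W/(m K). Returns (CPD km, gradient C/km, heat flow mW/m^2)."""
    z_base = 2.0 * z_centroid_km - z_top_km        # basal depth of magnetization = Curie point depth
    gradient = curie_temp_c / z_base               # one-dimensional steady-state geothermal gradient
    heat_flow = k_thermal * gradient               # q = k * dT/dz; with dT/dz in C/km this is mW/m^2
    return z_base, gradient, heat_flow

# Hypothetical block estimates (not the paper's data): top depth 2.1 km, centroid depth 7.2 km
print(curie_point_analysis(2.1, 7.2))              # -> (12.3 km, ~47 C/km, ~118 mW/m^2)
```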
Procedia PDF Downloads 78
1758 Perturbative Analysis on a Lunar Free Return Trajectory
Authors: Emre Ünal, Hasan Başaran
Abstract:
In this study, starting with a predetermined lunar free-return trajectory, an analysis of the major near-Earth perturbations is carried out. Referencing the historical Apollo-13 flight, the changes in the mission’s resultant perimoon and perigee altitudes with each perturbative effect are evaluated. The perturbations considered are Earth oblateness effects, up to the 6th order, atmospheric drag, third-body perturbations consisting of solar and planetary effects, and solar radiation pressure effects. It is found that, for a Moon mission, most of the main perturbative effects spoil the trajectory significantly, while some turn out to be negligible. It is seen that, for the anticipated future need for low-cost, reliable and safe trajectories to the Moon, most of the orbital perturbations are crucial.
Keywords: Apollo-13 trajectory, atmospheric drag, lunar trajectories, oblateness effect, perturbative effects, solar radiation pressure, third body perturbations
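Of the effects listed above, the dominant Earth-oblateness term is the J2 zonal harmonic. The sketch below evaluates the standard J2 perturbing acceleration in an Earth-centered inertial frame; it is only the first of the six zonal orders considered in the study, and the constants are standard reference values inserted here for illustration.

```python
import numpy as np

MU_EARTH = 3.986004418e14     # gravitational parameter, m^3/s^2
R_EARTH = 6378137.0           # equatorial radius, m
J2 = 1.08262668e-3            # second zonal harmonic coefficient

def j2_acceleration(r_eci):
    """Perturbing acceleration (m/s^2) due to Earth's J2 oblateness, for position r_eci in meters (ECI)."""
    x, y, z = r_eci
    r = np.linalg.norm(r_eci)
    k = -1.5 * J2 * MU_EARTH * R_EARTH**2 / r**5
    five_z2 = 5.0 * z**2 / r**2
    return k * np.array([x * (1.0 - five_z2), y * (1.0 - five_z2), z * (3.0 - five_z2)])

# Example: magnitude of the J2 perturbation at a 300 km altitude equatorial point (about 1e-2 m/s^2)
print(np.linalg.norm(j2_acceleration(np.array([R_EARTH + 300e3, 0.0, 0.0]))))
```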
Procedia PDF Downloads 148
1757 Conventional Synthesis and Characterization of Zirconium Molybdate, Nd2Zr3(MoO4)9
Authors: G. Çelik Gül, F. Kurtuluş
Abstract:
Rare-earth-containing complex metal oxides have drawn much attention due to physical, chemical and optical properties that make them useful in many areas, such as non-linear optical materials and ion exchangers. We have carried out a systematic study to obtain a rare-earth-containing zirconium molybdate compound, including its characterization, investigation of the crystal system and calculation of the unit cell parameters. After the successful synthesis of Nd2Zr3(MoO4)9, which is a member of the family of rare-earth-containing complex oxides, X-ray diffraction (XRD), High Score Plus/Rietveld refinement analysis, and Fourier Transform Infrared Spectroscopy (FTIR) were performed to determine the crystal structure. Morphological properties and elemental composition were determined by scanning electron microscopy (SEM) and energy dispersive X-ray (EDX) analysis. Thermal properties were observed via thermogravimetric-differential thermal analysis (TG/DTA).
Keywords: Nd₂Zr₃(MoO₄)₉, powder x-ray diffraction, solid state synthesis, zirconium molybdates
Procedia PDF Downloads 398
1756 Methods Employed to Mitigate Wind Damage on Ancient Egyptian Architecture
Authors: Hossam Mohamed Abdelfattah Helal Hegazi
Abstract:
Winds and storms are crucial weathering factors and primary causes of destruction and erosion for all materials on the Earth's surface. This naturally includes historical structures, with the impact of winds and storms intensifying their deterioration, particularly when carrying high-hardness sand particles during their passage across the ground. The ancient Egyptians utilized various methods to prevent wind damage to their architecture throughout the ancient Egyptian periods. One of the techniques employed was the use of clay or compacted earth as a filling material between opposing walls made of stone, bricks, or mud bricks. Walls made of reeds or woven tree branches were covered with clay to prevent the infiltration of wind and rain, enhancing structural integrity; this method was commonly used in hollow layers. Additionally, Egyptian engineers innovated a type of adobe brick with uniformly leveled sides, manufactured from dried clay. They utilized stone barriers, constructed wind traps, and planted trees in rows parallel to the prevailing wind direction. Moreover, they employed receptacles to drain rainwater resulting from wind-loaded rain and used mortar to fill gaps in roofs and structures. Furthermore, proactive measures such as the removal of sand from around historical and archaeological buildings were taken to prevent adverse effects.
Keywords: winds, storms, weathering, destruction, erosion, materials, Earth's surface, historical structures, impact
Procedia PDF Downloads 63
1755 Experimental Study of Upsetting and Die Forging with Controlled Impact
Authors: T. Penchev, D. Karastoyanov
Abstract:
The results from experimental research on deformation by upsetting and die forging of lead specimens with controlled impact are presented. The laboratory setup for conducting the investigations, which uses a cold rocket engine operated with compressed air, is described. The results show that using controlled impact achieves greater plastic deformation and consumes less impact energy than an ordinary impact deformation process.
Keywords: rocket engine, forging hammer, sticking impact, plastic deformation
Procedia PDF Downloads 372
1754 The Need for a Tool to Support Users of E-Science Infrastructures in a Virtual Laboratory Environment
Authors: Hashim Chunpir
Abstract:
Support processes play an important role in helping researchers (users) accomplish their research activities with the help of cyber-infrastructures. However, the current user-support process in cyber-infrastructures needs a feasible tool to support users. This tool must enable the users of a cyber-infrastructure to communicate efficiently with the staff of a cyber-infrastructure in order to get technical and scientific assistance, whilst saving resources at the same time. This research paper narrates the real story of employing various forms of tools to support user-staff communication. In addition, this paper presents the lessons learned from an exploration of the help-desk tools in the current state of the user support process in the Earth System Grid Federation (ESGF) from the support staff’s perspective. ESGF is a climate cyber-infrastructure that facilitates Earth System Modeling (ESM) and is taken as a case study in this paper. Finally, this study proposes the need for a tool, a framework or a platform that not only improves the user support process to address the support servicing needs of end-users of e-Science infrastructures but also eases the life of staff in providing assistance to the users. With the help of such a tool, the collaboration between users and the staff of cyber-infrastructures is made easier. Consequently, the research activities of the users of e-Science infrastructures will thrive as scientific and technical support will be available to users. Finally, this results in painless and productive e-Research.
Keywords: e-Science User Services, e-Research in Earth Sciences, Information Technology Services Management (ITSM), user support process, service desk, management of support activities, help desk tools, application of social media
Procedia PDF Downloads 473
1753 Uncertainty of the Brazilian Earth System Model for Solar Radiation
Authors: Elison Eduardo Jardim Bierhals, Claudineia Brazil, Deivid Pires, Rafael Haag, Elton Gimenez Rossini
Abstract:
This study evaluated the uncertainties involved in the solar radiation projections generated by the Brazilian Earth System Model (BESM) of the Weather and Climate Prediction Center (CPTEC), belonging to the Coupled Model Intercomparison Project Phase 5 (CMIP5), with the aim of assessing the accuracy of the model's solar radiation projections and thus establishing the viability of its use. Two different scenarios elaborated by the Intergovernmental Panel on Climate Change (IPCC) were evaluated: RCP 4.5 (with more optimistic boundary conditions) and RCP 8.5 (with more pessimistic initial conditions). The methods used to verify the accuracy of the model were the Nash coefficient and the statistical bias, as they better represent these atmospheric patterns. BESM showed a tendency to overestimate solar radiation in most regions of the state of Rio Grande do Sul, and through the validation methods adopted by this study, BESM did not present satisfactory accuracy.
Keywords: climate changes, projections, solar radiation, uncertainty
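For reference, the two validation scores mentioned above are simple to compute from paired observed and simulated series. The sketch below implements the Nash-Sutcliffe efficiency (1 is a perfect fit, values well below 1 indicate poor skill) and the mean bias (positive values indicate overestimation, as reported for BESM); the example arrays are placeholders, not the study's data.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 minus the sum of squared errors over the variance of the observations."""
    obs, sim = np.asarray(observed, float), np.asarray(simulated, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def mean_bias(observed, simulated):
    """Mean bias of the simulation; positive values mean the model overestimates."""
    return float(np.mean(np.asarray(simulated, float) - np.asarray(observed, float)))

# Placeholder monthly solar radiation values (MJ/m^2/day), purely illustrative
obs = np.array([14.2, 16.8, 18.1, 15.4, 12.9, 11.0])
sim = np.array([15.5, 18.0, 19.4, 16.9, 14.1, 12.3])
print(nash_sutcliffe(obs, sim), mean_bias(obs, sim))
```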
Procedia PDF Downloads 251
1752 Efficacy of Coconut Shell Pyrolytic Oil Distillate in Protecting Wood Against Bio-Deterioration
Authors: K. S. Shiny, R. Sundararaj
Abstract:
Coconut trees (Cocos nucifera L.) are grown in many parts of India and the world because of their multiple uses. During pyrolysis, coconut shells yield an oil, which is a dark, thick liquid. Upon simple distillation it produces a more or less colourless liquid, termed coconut shell pyrolytic oil distillate (CSPOD). This manuscript reports and discusses the use of coconut shell pyrolytic oil distillate as a potential wood protectant against bio-deterioration. Since botanical products are being tested worldwide as eco-friendly wood protectants, the utilization of CSPOD as a wood protectant is of great importance. The efficacy of CSPOD as a wood protectant was evaluated as per the Bureau of Indian Standards (BIS) in terms of its antifungal, anti-borer, and termiticidal activities. Specimens of rubber wood (Hevea brasiliensis), in six replicates each for two treatment methods, namely spraying and dipping (48 h), were employed. CSPOD was found to impart total protection against termites for six months compared to the control under field conditions. For assessing the efficacy of CSPOD against fungi, the treated blocks were subjected to the attack of two white rot fungi, Tyromyces versicolor (L.) Fr. and Polyporus sanguineus (L.) G. Mey, and two brown rot fungi, Polyporus meliae (Undrew.) Murrill. and Oligoporus placenta (Fr.) Gilb. & Ryvarden. Results indicated that treatment with CSPOD significantly protected wood from the damage caused by the decay fungi. The efficacy of CSPOD against the wood borer Lyctus africanus Lesne was assessed using six pairs of male and female beetles, and it gave promising results in protecting the treated wood blocks when compared to control blocks. As far as the treatment methods are concerned, dip treatment was found to be more effective than spraying. The results of the present investigation indicate that CSPOD is a promising botanical compound with the potential to replace synthetic wood protectants. As coconut shell pyrolytic oil is a waste byproduct of the coconut shell charcoal industry, its utilization as a wood preservative will expand the economic returns from such industries.
Keywords: coconut shell pyrolytic oil distillate, eco-friendly wood protection, termites, wood borers, wood decay fungi
Procedia PDF Downloads 372
1751 Study on Effectiveness of Strategies to Re-Establish Landscape Connectivity of Expressways with Reference to Southern Expressway Sri Lanka
Authors: N. G. I. Aroshana, S. Edirisooriya
Abstract:
Highway construction is an emerging development trend in Sri Lanka. These development activities have given rise to many environmental and social issues. Landscape fragmentation, caused by the construction of expressways, is one of the main issues severely affecting the environment. The Sri Lankan expressway system attempts to treat the fragmented landscape by using highway crossing structures. This paper presents a post-construction landscape study of the effectiveness of landscape connectivity structures in restoring connectivity. The Geographic Information Systems (GIS) least-cost-path tool has been used in two selected plots, 25 km along the expressway, to identify animal crossing paths. Animal accident data were used as a measure for determining which plot contributed most to landscape connectivity. The number of patches, mean patch size and class area were used as parameters to determine the most effective land use class for re-establishing landscape connectivity. The findings of the research show that scrub, grass and marsh were the land use typologies most positively affected, increasing landscape connectivity; their growth increased by 8% within 12 years. From the least-cost analysis within plot one, 28.5% of the total animal crossing structures are within high-resistance land use classes. The Southern Expressway used reinforced compressed earth technologies for construction, which has controlled the growth of the climax community. According to all the findings, it can be assumed that the involvement of the landscape crossing structures contributes to re-establishing connectivity, but it is not enough to restore the majority of the disturbance caused by the expressway. The connectivity measures used within the study can be used as a tool to re-evaluate future involvement of highway crossing structures. Proper placement of the highway crossing structures leads to an increased rate of connectivity. The study recommends monitoring all stages of the project (preconstruction, construction and post-construction) and the preliminary design, and applying the connectivity assessment strategies used in this research, to overcome the complications regarding the re-establishment of landscape connectivity using highway crossing structures that facilitate the growth of flora and fauna.
Keywords: landscape fragmentation, least cost path, land use analysis, landscape connectivity structures
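The least-cost-path analysis referred to above amounts to a shortest-path search over a resistance raster. The sketch below runs Dijkstra's algorithm on a small grid of made-up resistance values (low cost for scrub, grass and marsh, high cost for built-up land); it is only meant to illustrate what the GIS tool computes, not to reproduce the study's rasters.

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra over a 4-connected resistance raster; cost is accumulated per cell entered."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and d + cost[nr][nc] < dist.get((nr, nc), float("inf")):
                dist[(nr, nc)] = d + cost[nr][nc]
                prev[(nr, nc)] = (r, c)
                heapq.heappush(heap, (dist[(nr, nc)], (nr, nc)))
    path, node = [goal], goal
    while node != start:                      # walk back from the goal to the start
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Illustrative resistance raster: 1 = scrub/grass/marsh, 10 = built-up land
raster = [[1, 1, 10, 1],
          [1, 10, 10, 1],
          [1, 1, 1, 1]]
print(least_cost_path(raster, start=(0, 0), goal=(0, 3)))
```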
Procedia PDF Downloads 150
1750 A Review on the Problems of Constructing a Theory of Quantum Gravity
Authors: Amber Jamal, Imran Siddiqui, Syed Tanveer Iqbal
Abstract:
This review aims to shed some light on the problems of constructing a theory of spacetime and geometry in terms of all quantum degrees of freedom, called ‘Quantum Gravity’. Such a theory, which should be effective at all scales of distance and energy, would describe the enigma of the beginning of the Universe and its possible end, and reduce to general relativity at large distances in a semi-classical approximation. Furthermore, a theory of quantum gravity also describes the Universe as a whole and addresses the most fundamental questions that have puzzled scientists for decades, such as: what is space, what is time, and what is the fundamental structure of the Universe? Is spacetime discrete? If it is, where does the continuum of spacetime come from at low energies and macroscopic scales, and how does it emerge from its fundamentally discrete building blocks? Quantum Field Theory (QFT) is a framework that describes the microscopic properties and dynamics of the basic building blocks of any condensed matter system. In QFT, atoms are quanta of continuous fields. At smaller scales or higher energies, the continuum description of spacetime fails. Therefore, a new description is required in terms of microscopic constituents (atoms or molecules). The objective of this scientific endeavor is to discuss the above-mentioned problems rigorously and to discuss possible ways out of the problems.
Keywords: QFT, quantum degrees of freedom, quantum gravity, semi-classical approximation
Procedia PDF Downloads 121
1749 Reliability-Based Design of an Earth Slope Taking into Account Unsaturated Soil Properties
Authors: A. T. Siacara, A. T. Beck, M. M. Futai
Abstract:
This paper shows how accurately and efficiently reliability analyses of geotechnical installations can be performed by directly coupling geotechnical software with a reliability solver. An earth slope is used as the study object. The limit equilibrium method of Morgenstern-Price is used to calculate factors of safety and find the critical slip surface. The deterministic software packages Seep/W and Slope/W are coupled with the StRAnD reliability software. Reliability indexes of critical probabilistic surfaces are evaluated by the first-order reliability method (FORM). By means of sensitivity analysis, the effective cohesion (c') is found to be the most relevant uncertain geotechnical parameter for slope equilibrium. The slope was tested using different geometries, taking into account unsaturated soil properties. Finally, a critical slip surface, identified in terms of the minimum factor of safety, is shown here not to be the critical surface in terms of the reliability index.
Keywords: slope, unsaturated, reliability, safety, seepage
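As a hedged illustration of the FORM step mentioned above, the sketch below runs the Hasofer-Lind/Rackwitz-Fiessler iteration for independent normal variables and reports the reliability index and the corresponding failure probability. The limit-state function used in the example is a made-up linear stand-in for "factor of safety minus one"; in the paper this evaluation is provided by the coupled Seep/W-Slope/W analyses, not by a closed-form expression.

```python
import numpy as np
from scipy.stats import norm

def form_beta(g, mu, sigma, tol=1e-6, max_iter=100):
    """Hasofer-Lind/Rackwitz-Fiessler iteration for independent normal variables.
    g(x) is the limit-state function in physical space; failure corresponds to g(x) < 0."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    u = np.zeros_like(mu)
    for _ in range(max_iter):
        x = mu + sigma * u
        gx = g(x)
        grad = np.empty_like(u)                 # gradient with respect to the standard normal variables
        for i in range(len(u)):
            dx = np.zeros_like(x)
            dx[i] = 1e-6 * sigma[i]
            grad[i] = (g(x + dx) - gx) / 1e-6   # chain rule: dg/du_i = sigma_i * dg/dx_i
        u_new = (grad @ u - gx) / (grad @ grad) * grad
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    beta = np.linalg.norm(u)
    return beta, norm.cdf(-beta)

# Placeholder limit state: g = FS - 1 approximated as a linear function of c' (kPa) and phi' (deg)
g = lambda x: 0.03 * x[0] + 0.025 * x[1] - 1.0
print(form_beta(g, mu=[20.0, 30.0], sigma=[5.0, 2.0]))   # -> (about 2.2, Pf about 1e-2)
```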
Procedia PDF Downloads 149
1748 The Effect of Curing Temperature and Rice Husk Ash Addition on the Behaviour of Sulfate-Rich Clay after Lime Stabilization
Authors: E. Bittar, A. Quiñonez, F. Mencia, E. Aguero, M. Delgado, V. Arriola, R. López
Abstract:
In the western region of Paraguay, the poor condition of the roads has negatively affected the development of this zone, where the absence of petrous material has led engineers to opt for the stabilization of soils with lime or cement as the main structure for the bases and subbases of these roads. In several areas of this region, high sulfate contents have been found both in groundwater and in soils, which, when they react with lime or cement, generate a new problem instead of solving it. On the other hand, the use of industrial wastes such as granulated slag and fly ash has proved to be a sustainable practice widely used in the manufacture of cement, and now also in the stabilization of soils worldwide. Works on sulfate-bearing soils stabilized either with granulated slag or with fly ash and lime have shown good mechanical performance. This research seeks to evaluate the mechanical behaviour of soils with high sulfate contents stabilized with lime by curing them both at the standard temperature (23 ± 2 °C) and at 40 ± 2 °C. Moreover, it attempts to assess whether the addition of rice husk ash has a positive influence on the new geomaterial. The 40 ± 2 °C curing temperature was selected to simulate the average local temperature in summer and part of the spring season, whereas rice husk ash is an affordable waste produced in the region. An extensive experimental program, which includes unconfined compression, durability and free swell tests, was carried out considering different dry unit weights, lime contents and the addition of 20% of rice husk ash. The results showed that the addition of rice husk ash increases the resistance and durability of the material and decreases its expansion; moreover, the specimens cured at a temperature of 40 ± 2 °C showed higher resistance, better durability and lower expansion compared to those cured at the standard temperature of 23 ± 2 °C.
Keywords: durability, expansion, lime stabilization, rice husk ash, sulfate
Procedia PDF Downloads 123
1747 Layer-Level Feature Aggregation Network for Effective Semantic Segmentation of Fine-Resolution Remote Sensing Images
Authors: Wambugu Naftaly, Ruisheng Wang, Zhijun Wang
Abstract:
Models based on convolutional neural networks (CNNs), in conjunction with Transformer, have excelled in semantic segmentation, a fundamental task for intelligent Earth observation using remote sensing (RS) imagery. Nonetheless, tokenization in the Transformer model undermines object structures and neglects inner-patch local information, whereas CNNs are unable to simulate global semantics due to limitations inherent in their convolutional local properties. The integration of the two methodologies facilitates effective global-local feature aggregation and interactions, potentially enhancing segmentation results. Inspired by the merits of CNNs and Transformers, we introduce a layer-level feature aggregation network (LLFA-Net) to address semantic segmentation of fine-resolution remote sensing (FRRS) images for land cover classification. The simple yet efficient system employs a transposed unit that hierarchically utilizes dense high-level semantics and sufficient spatial information from various encoder layers through a layer-level feature aggregation module (LLFAM) and models global contexts using structured Transformer blocks. Furthermore, the decoder aggregates resultant features to generate rich semantic representation. Extensive experiments on two public land cover datasets demonstrate that our proposed framework exhibits competitive performance relative to the most recent frameworks in semantic segmentation.Keywords: land cover mapping, semantic segmentation, remote sensing, vision transformer networks, deep learning
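The abstract does not spell out the internals of the layer-level feature aggregation module, so the PyTorch sketch below is only one plausible reading of the idea: project each encoder stage to a common channel width, upsample everything to the finest resolution, and fuse with a small convolutional block. The channel sizes, fusion operator and normalization are assumptions made for illustration, not the authors' published design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LayerLevelFeatureAggregation(nn.Module):
    """Illustrative aggregation of multi-stage encoder features into one dense representation."""
    def __init__(self, in_channels=(64, 128, 256, 512), out_channels=128):
        super().__init__()
        # 1x1 projections bring every encoder stage to a common channel width
        self.proj = nn.ModuleList([nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels])
        self.fuse = nn.Sequential(
            nn.Conv2d(out_channels * len(in_channels), out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, features):
        # features: list of encoder outputs ordered from finest to coarsest resolution
        target = features[0].shape[-2:]
        upsampled = [F.interpolate(p(f), size=target, mode="bilinear", align_corners=False)
                     for p, f in zip(self.proj, features)]
        return self.fuse(torch.cat(upsampled, dim=1))

# Shape check with dummy encoder outputs (batch 2, 128x128 input downsampled by 4/8/16/32)
feats = [torch.randn(2, c, 128 // s, 128 // s) for c, s in zip((64, 128, 256, 512), (4, 8, 16, 32))]
print(LayerLevelFeatureAggregation()(feats).shape)   # -> torch.Size([2, 128, 32, 32])
```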
Procedia PDF Downloads 11
1746 Medical Image Watermark and Tamper Detection Using Constant Correlation Spread Spectrum Watermarking
Authors: Peter U. Eze, P. Udaya, Robin J. Evans
Abstract:
Data hiding can be achieved by Steganography or invisible digital watermarking. For digital watermarking, both accurate retrieval of the embedded watermark and the integrity of the cover image are important. Medical image security in Teleradiology is one of the applications where the embedded patient record needs to be extracted with accuracy as well as the medical image integrity verified. In this research paper, the Constant Correlation Spread Spectrum digital watermarking for medical image tamper detection and accurate embedded watermark retrieval is introduced. In the proposed method, a watermark bit from a patient record is spread in a medical image sub-block such that the correlation of all watermarked sub-blocks with a spreading code, W, would have a constant value, p. The constant correlation p, spreading code, W and the size of the sub-blocks constitute the secret key. Tamper detection is achieved by flagging any sub-block whose correlation value deviates by more than a small value, ℇ, from p. The major features of our new scheme include: (1) Improving watermark detection accuracy for high-pixel depth medical images by reducing the Bit Error Rate (BER) to Zero and (2) block-level tamper detection in a single computational process with simultaneous watermark detection, thereby increasing utility with the same computational cost.Keywords: Constant Correlation, Medical Image, Spread Spectrum, Tamper Detection, Watermarking
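One plausible reading of the embedding rule described above is sketched below: each sub-block is shifted along a ±1 spreading code W so that its correlation becomes exactly +p (bit 1) or −p (bit 0), and a sub-block is flagged as tampered when the magnitude of its correlation drifts more than ε away from p. Rounding and clipping to valid pixel values, key handling and the choice of sub-block size are omitted; treat this as an illustration of the constant-correlation idea rather than the authors' exact scheme.

```python
import numpy as np

def embed_bit(block, W, bit, p):
    """Shift one sub-block along the spreading code so its correlation equals +p or -p."""
    x = block.astype(float).ravel()
    target = p if bit else -p
    c = float(np.mean(x * W))                              # current correlation with the +/-1 code
    return (x + (target - c) * W).reshape(block.shape)     # mean(W*W) = 1, so correlation becomes target

def check_bit(block, W, p, eps):
    """Recover the embedded bit and flag the sub-block as tampered if |correlation| deviates from p."""
    c = float(np.mean(block.astype(float).ravel() * W))
    return int(c > 0), abs(abs(c) - p) > eps

# Demonstration on a random 8x8 sub-block with a hypothetical key (W, p, eps)
rng = np.random.default_rng(0)
W = rng.choice([-1.0, 1.0], size=64)
block = rng.integers(0, 4096, size=(8, 8))    # e.g. a 12-bit (high pixel depth) medical image block
marked = embed_bit(block, W, bit=1, p=10.0)
print(check_bit(marked, W, p=10.0, eps=0.5))  # -> (1, False): bit recovered, block intact
marked[0, 0] += 500                           # simulate tampering of one pixel
print(check_bit(marked, W, p=10.0, eps=0.5))  # -> (1, True): deviation from p flags the sub-block
```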
Procedia PDF Downloads 196
1745 Rare-Earth Ions Doped Lithium Niobate Crystals: Luminescence and Raman Spectroscopy
Authors: Ninel Kokanyan, Edvard Kokanyan, Anush Movsesyan, Marc D. Fontana
Abstract:
Lithium niobate (LN) is one of the most widely used ferroelectrics, with a large number of applications such as phase conjugation, holographic storage, frequency doubling and SAW sensors. Furthermore, the possibility of doping with rare-earth ions leads to new laser applications. Ho and Tm dopants are interesting due to the laser emission obtained at around 2 µm. Raman spectroscopy is a powerful technique providing a wealth of information about the physicochemical and optical properties of a given material. Polarized Raman measurements were carried out on Ho- and Tm-doped LN crystals with excitation wavelengths of 532 nm and 785 nm. In the obtained anti-Stokes Raman spectra, we detect the expected modes according to the Raman selection rules. In contrast, the Stokes Raman spectra are significantly different from what is expected from the selection rules: additional forbidden lines are detected. These lines have quite high intensity and are well defined. Moreover, the intensity of the mentioned additional lines increases with increasing Ho or Tm concentration in the crystal. These additional lines are attributed to emission lines reflecting the photoluminescence spectra of these crystals. It means that, in our case, we were able to detect with very good resolution, in the same Stokes spectrum, both the transitions between the electronic states and the vibrational states. The analysis of these data is reported as a function of Ho and Tm content, for different polarizations and wavelengths of the incident laser beam. The results also highlight additional information about the π and σ polarizations of the crystals under study.
Keywords: lithium niobate, Raman spectroscopy, luminescence, rare-earth ions doped lithium niobate
Procedia PDF Downloads 221
1744 Recovery of Rare Earths and Scandium from in situ Leaching Solutions
Authors: Maxim S. Botalov, Svetlana М. Titova, Denis V. Smyshlyaev, Grigory M. Bunkov, Evgeny V. Kirillov, Sergey V. Kirillov, Maxim A. Mashkovtsev, Vladimir N. Rychkov
Abstract:
In uranium production, in-situ leaching (ISL), with its relatively low cost, has become an important technology. As the orebody containing uranium most often also contains a considerable amount of other metals, particularly rare earth metals, it has become feasible to recover the REM from the barren ISL solutions from which the major uranium content has been removed. Ural Federal University (UrFU, Ekaterinburg, Russia) has performed joint research on the development of industrial technologies for the extraction of REM and scandium compounds from uranium ISL solutions. The leaching experiments at UrFU have been supported by a multicomponent solution model. The experimental work combines solvent extraction with advanced ion exchange methodology in a pilot facility capable of treating 500 kg/hr of solids. The pilot allows for the recovery of a 99% concentrate of scandium oxide and a collective concentrate with over 50% REM content, with further recovery of heavy and light REM concentrates (99%).
Keywords: extraction, ion exchange, rare earth elements, scandium
Procedia PDF Downloads 233
1743 Probabilistic and Stochastic Analysis of a Retaining Wall for C-Φ Soil Backfill
Authors: André Luís Brasil Cavalcante, Juan Felix Rodriguez Rebolledo, Lucas Parreira de Faria Borges
Abstract:
A methodology for the probabilistic analysis of active earth pressure on a retaining wall with c-Φ soil backfill is described in this paper. The Rosenblueth point estimate method is used to estimate the failure probability of a gravity retaining wall. The basic principle of this methodology is to use two point estimates, i.e., the mean value plus and minus the standard deviation, to examine each variable in the safety analysis. The simplicity of this framework assures its wide application. The calculation requires 2ⁿ evaluations during the analysis, since the system is governed by n variables. In this study, a probabilistic model based on the Rosenblueth approach for the computation of the overturning probability of failure of a retaining wall is presented. The obtained results have shown the advantages of this kind of model in comparison with the deterministic solution. In a relatively easy way, the uncertainty in the wall and fill parameters is taken into account, and some practical results can be obtained for the retaining structure design.
Keywords: retaining wall, active earth pressure, backfill, probabilistic analysis
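A minimal sketch of the 2ⁿ point-estimate procedure is given below for the overturning check of a gravity wall. The performance function, wall geometry, weight and the statistics of the backfill friction angle and unit weight are all made-up placeholders used only to show the mechanics of the method (equal weights of 1/2ⁿ apply for uncorrelated, symmetric variables); the paper's own wall and soil data are not reproduced.

```python
import itertools
import numpy as np
from scipy.stats import norm

def rosenblueth_2n(g, means, stds):
    """Rosenblueth point estimates: evaluate g at every (mean +/- std) combination (2^n points)."""
    points = itertools.product(*[(m - s, m + s) for m, s in zip(means, stds)])
    values = np.array([g(*pt) for pt in points])
    return values.mean(), values.std()        # equal weights for uncorrelated, symmetric variables

def overturning_fs(phi_deg, gamma):
    """Placeholder factor of safety against overturning for a gravity wall with cohesionless backfill."""
    H, Wt, b = 4.0, 180.0, 1.2                # wall height (m), weight (kN/m), lever arm (m): made-up
    ka = np.tan(np.radians(45.0 - phi_deg / 2.0)) ** 2     # Rankine active coefficient
    Pa = 0.5 * ka * gamma * H ** 2            # active thrust (kN/m), resultant at H/3 above the base
    return (Wt * b) / (Pa * H / 3.0)

mu_fs, sd_fs = rosenblueth_2n(overturning_fs, means=[30.0, 18.0], stds=[3.0, 1.0])
beta = (mu_fs - 1.0) / sd_fs                  # reliability index for the limit state FS = 1
print(mu_fs, sd_fs, beta, norm.cdf(-beta))    # mean FS, its spread, beta and the failure probability
```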
Procedia PDF Downloads 418
1742 Optimizing Foaming Agents by Air Compression to Unload a Liquid Loaded Gas Well
Authors: Mhenga Agneta, Li Zhaomin, Zhang Chao
Abstract:
When the gas velocity is high enough, gas can entrain fluid and carry it to the surface, but as time passes, the velocity drops to a critical point where fluids start to hold up in the tubing and cause liquid loading, which prevents gas production and may lead to the death of the well. Foam injection is widely used as one of the methods to unload liquid. Since wells have different characteristics, it is not guaranteed that foam can be applied in all of them and bring successful results. This research presents a technology to optimize the efficiency of foam in unloading liquid by air compression. Two methods are used to explain the optimization: (i) mathematical formulas are used to explain how density and critical velocity can be minimized when air is compressed into the foaming agents, and then the relationship between flow rates and the pressure increase that would boost the bottom-hole pressure and increase the velocity to lift liquid to the surface; (ii) experiments to test foam carryover capacity and stability as a function of time and surfactant concentration, whereby three surfactants, anionic sodium dodecyl sulfate (SDS), nonionic Triton 100 and cationic hexadecyltrimethylammonium bromide (HDTAB), were probed. The best foaming agents were injected to lift the liquid loaded in a vertical well model built from steel tubing of 2.5 cm diameter and 390 cm height, covered by a transparent glass casing of 5 cm diameter and 450 cm height. The results show that, after injecting the foaming agents, liquid unloading was 75% successful; however, the efficiency of the foaming agents in unloading liquid increased by 10% with the addition of compressed air at a ratio of 1:1. Measured values and calculated values were compared and showed about a ± 3% difference, which is acceptable. The successful application of the technology indicates that engineers and stakeholders could bring water-flooded gas wells back to production with optimized results by first paying attention to the type of surfactants (foaming agents) used and the concentration of surfactants, and then compressing air into the foaming agents at a proper ratio.
Keywords: air compression, foaming agents, gas well, liquid loading
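The critical-velocity argument above is usually framed with a Turner-type liquid-droplet model: the minimum gas velocity able to lift the largest stable droplet scales with the fourth root of surface tension and the liquid-gas density difference. The sketch below evaluates that droplet-model expression in SI units; foaming lowers both the effective surface tension and the effective liquid density, which is how the "minimized density and critical velocity" shows up. The drag coefficient, critical Weber number and example fluid properties are standard illustrative values, not data from this study.

```python
def critical_unloading_velocity(sigma, rho_liquid, rho_gas, g=9.81, cd=0.44, we_crit=30.0):
    """Turner-type droplet model: minimum gas velocity (m/s) that keeps droplets moving upward.
    sigma in N/m, densities in kg/m^3; the coefficient is about 3.1 for Cd = 0.44 and We = 30."""
    return (4.0 * we_crit * g * sigma * (rho_liquid - rho_gas) / (3.0 * cd * rho_gas ** 2)) ** 0.25

# Illustrative comparison: plain water versus a foamed column with lower surface tension and density
v_water = critical_unloading_velocity(sigma=0.06, rho_liquid=1000.0, rho_gas=2.4)
v_foam = critical_unloading_velocity(sigma=0.03, rho_liquid=300.0, rho_gas=2.4)
print(v_water, v_foam)   # the foamed case unloads at a noticeably lower gas velocity
```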
Procedia PDF Downloads 135
1741 Early Detection of Major Earthquakes Using Broadband Accelerometers
Authors: Umberto Cerasani, Luca Cerasani
Abstract:
Methods for earthquake forecasting have been intensively investigated in the last decades, but there is still no universal solution agreed upon by seismologists. Rock failure is most often preceded by a tiny elastic movement in the failure area and by the appearance of micro-cracks. These micro-cracks could be detected at the soil surface and represent useful earthquake precursors. The aim of this study was to verify whether tiny raw acceleration signals (in the 10⁻¹ to 10⁻⁴ cm/s² range) prior to the arrival of the main primary waves could be exploitable and related to earthquake magnitude. Mathematical tools such as the Fast Fourier Transform (FFT), moving averages and wavelets have been applied to raw acceleration data available on the ITACA web site, and the study focused on one of the most unpredictable earthquakes, i.e., the one that occurred on August 24th, 2016, at 01:36 in the central Italy area. It appeared that these tiny acceleration signals preceding the main P-waves have different patterns, in both the frequency and time domains, for high-magnitude earthquakes compared to lower ones.
Keywords: earthquake, accelerometer, earthquake forecasting, seism
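A minimal version of the signal processing mentioned above (a moving average in the time domain plus an FFT amplitude spectrum) is sketched below on a synthetic record; the sampling rate, window length and signal content are placeholders, not ITACA data, and the wavelet analysis used in the study is not reproduced.

```python
import numpy as np

def moving_average(signal, window):
    """Simple boxcar smoothing of a raw acceleration trace."""
    return np.convolve(signal, np.ones(window) / window, mode="same")

def amplitude_spectrum(signal, fs):
    """One-sided FFT amplitude spectrum of a record sampled at fs (Hz)."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    amps = np.abs(np.fft.rfft(signal)) * 2.0 / n
    return freqs, amps

# Synthetic pre-P-wave record: a weak 2 Hz component buried in noise (amplitudes in cm/s^2)
fs = 200.0
t = np.arange(0.0, 30.0, 1.0 / fs)
accel = 1e-3 * np.sin(2 * np.pi * 2.0 * t) + 5e-4 * np.random.default_rng(1).standard_normal(t.size)
smooth = moving_average(accel, window=41)
freqs, amps = amplitude_spectrum(accel, fs)
print(freqs[np.argmax(amps[1:]) + 1])   # dominant nonzero frequency, close to 2 Hz
```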
Procedia PDF Downloads 145
1740 Long-Term Follow-Up of Dynamic Balance, Pain and Functional Performance in Cruciate Retaining, Posterior Stabilized Total Knee Arthroplasty
Authors: Ahmed R. Z. Baghdadi, Mona H. Gamal Eldein
Abstract:
Background: With the perceived pain and poor function experienced following knee arthroplasty, patients usually feel unsatisfied. Yet, a controversy still persists on the appropriate operative technique that doesn’t affect proprioception much. Purpose: This study compared the effects of Cruciate Retaining (CR) and Posterior Stabilized (PS) total knee arthroplasty (TKA on dynamic balance, pain and functional performance following rehabilitation. Methods: Thirty patients with CRTKA (group I), thirty with PSTKA (group II) and fifteen indicated for arthroplasty but weren’t operated on yet (group III) participated in the study. The mean age was 54.53±3.44, 55.13±3.48 and 55.33±2.32 years and BMI 35.7±3.03, 35.7±1.99 and 35.73±1.03 kg/m2 for group I, II, and III respectively. The Berg Balance Scale (BBS), WOMAC pain subscale and Timed-Up-and-Go (TUG) and Stair-Climbing (SC) tests were used for assessment. Assessments were conducted four weeks pre- and post-operatively, three, six and twelve months post-operatively with the control group being assessed at the same time intervals. The post-operative rehabilitation involved hospitalization (1st week), home-based (2nd-4th weeks), and outpatient clinic (5th-12th weeks) programs, follow-up to all groups for twelve months. Results: The Mixed design MANOVA revealed that group I had significantly lower pain scores and SC time compared with group II three, six and twelve months post-operatively. Moreover, the BBS scores increased significantly and the pain scores and TUG and SC time decreased significantly six months post-operatively compared with four weeks pre- and post-operatively and three months post-operatively in group I and II with the opposite being true four weeks post-operatively. But no significant differences in BBS scores, pain scores and TUG and SC time between six and twelve months post-operatively in group I and II. Interpretation/Conclusion: CRTKA is preferable to PSTKA, possibly due to the preserved human proprioceptors in the un-excised PCL.Keywords: dynamic balance, functional performance, knee arthroplasty, long-term
Procedia PDF Downloads 411
1739 Aerodynamic Design Optimization Technique for a Tube Capsule That Uses an Axial Flow Air Compressor and an Aerostatic Bearing
Authors: Ahmed E. Hodaib, Muhammed A. Hashem
Abstract:
High-speed transportation has become a growing concern. To increase high-speed efficiency and minimize the power consumption of a vehicle, we need to eliminate the friction with the ground and minimize the aerodynamic drag acting on the vehicle. Due to the complexity and high power requirements of electromagnetic levitation, we make use of the air in front of the capsule, which produces the majority of the drag, compressing it in two phases and injecting a proportion of it through small nozzles to make a high-pressure air cushion to levitate the capsule. The tube is partially evacuated so that the air pressure is optimized for maximum compressor effectiveness, optimum tube size, and minimum vacuum pump power consumption. The total relative mass flow rate of the tube air is divided into two fractions. One is bypassed to flow over the capsule body, ensuring that no choked flow takes place. The other fraction is sucked in by the compressor, where it is diffused to decrease the Mach number (to around 0.8) to be suitable for the compressor inlet. The air is then compressed and intercooled, then split. One fraction is expanded through a tail nozzle to contribute to generating thrust. The other is compressed again. Bleed from the two compressors is used to maintain a constant air pressure in an air tank. The air tank is used to supply air for levitation. Dividing the total mass flow rate increases the achievable speed (Kantrowitz limit), and compressing it decreases the blockage of the capsule. As a result, the aerodynamic drag on the capsule decreases. As the tube pressure decreases, the drag decreases and the capsule power requirements decrease; however, the vacuum pump consumes more power. That is why design optimization techniques are to be used to get the optimum values for all the design variables given specific design inputs. Aerodynamic shape optimization, capsule and tube sizing, compressor design, diffuser and nozzle expander design and the effect of the air bearing on the aerodynamics of the capsule are to be considered. The variations of the variables are to be studied for changes in capsule velocity and air pressure.
Keywords: tube-capsule, hyperloop, aerodynamic design optimization, air compressor, air bearing
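For the diffuser sizing step described above (slowing the ingested stream to roughly Mach 0.8 at the compressor face), the one-dimensional isentropic area-Mach relation gives a first estimate of the required area change. The sketch below implements that standard relation; the inlet Mach number in the example is a hypothetical value, and a real inlet design would also need the Kantrowitz starting limit and loss models.

```python
def area_ratio(mach, gamma=1.4):
    """Isentropic A/A* as a function of Mach number for a calorically perfect gas."""
    term = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * mach ** 2)
    return term ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / mach

# Area change needed to diffuse from a hypothetical inlet Mach 0.95 to the Mach 0.8 compressor face
m_inlet, m_face = 0.95, 0.8
print(area_ratio(m_face) / area_ratio(m_inlet))   # exit area / inlet area at matched mass flow, ~1.04
```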
Procedia PDF Downloads 330
1738 Space Tourism Pricing Model Revolution from Time Independent Model to Time-Space Model
Authors: Kang Lin Peng
Abstract:
Space tourism emerged in 2001 and became famous in 2021, following the development of space technology. The space market is distorted because of the excess demand. Space tourism is currently rare and extremely expensive, with biased luxury product pricing; it is a seller’s market in which consumers cannot bargain. Spaceship companies such as Virgin Galactic, Blue Origin, and Space X have charged space tourism prices from 200 thousand to 55 million depending on the various heights reached in space. There should be a reasonable price set on a fair basis. This study aims to derive a spacetime pricing model, which is different from the general pricing model on the earth’s surface. We apply general relativity theory to deduce the mathematical formula for the space tourism pricing model, which covers the traditional time-independent model. In the future, the price of space travel will be different from current flight travel when space travel is measured in light-year units. The pricing of general commodities mainly considers the general equilibrium of supply and demand. A pricing model that considers risks and returns with time as the only dependent variable is acceptable when commodities are on the earth’s surface, called flat spacetime. Current economic theories based on an independent time scale in flat spacetime do not consider the curvature of spacetime. Current flight services flying at heights of 6, 12, and 19 kilometers are charged with a pricing model that measures the time coordinate independently. However, the emerging space tourism flights reach heights of about 100 to 550 kilometers, which enlarges the spacetime curvature, meaning tourists will escape from the near-zero curvature at the earth’s surface to the larger curvature of space. Different spacetime spans should be considered in the pricing model of space travel to echo general relativity theory. Intuitively, this spacetime commodity needs to consider the change of spacetime curvature from the earth to space. We can assume the value of each spacetime curvature unit corresponds to the gradient change of each Ricci or energy-momentum tensor. Then we know how much to spend by integrating the spacetime from the earth to space. The concept is to add a price component p corresponding to general relativity theory. The space travel pricing model degenerates into a time-independent model, which becomes the model of traditional commodity pricing. The contribution is that the derivation of the space tourism pricing model will be a breakthrough in philosophical and practical issues for space travel. The results of the space tourism pricing model extend the traditional time-independent flat-spacetime model. A pricing model that embeds spacetime as in general relativity theory can better reflect the rationality and accuracy of space travel on the universal scale. The universal scale, moving from an independent time scale to a spacetime scale, will bring a brand-new pricing concept for space travel commodities. Fair and efficient spacetime economics will also benefit human travel when we can travel in light-year units in the future.
Keywords: space tourism, spacetime pricing model, general relativity theory, spacetime curvature
Procedia PDF Downloads 129
1737 Design of a Telemetry, Tracking, and Command Radio-Frequency Receiver for Small Satellites Based on Commercial Off-The-Shelf Components
Authors: A. Lovascio, A. D’Orazio, V. Centonze
Abstract:
For several years now, the aerospace industry has been developing more and more small satellites for Low-Earth Orbit (LEO) missions. Such satellites have a low cost of manufacture and launch since their size and weight are smaller than those of other types of satellites. However, because of size limitations, small satellites need integrated electronic equipment based on digital logic. Moreover, LEO missions require telecommunication modules with high throughput to transmit a large amount of data to Earth in a short time. In order to meet such requirements, in this paper we propose a Telemetry, Tracking & Command module optimized through the use of Commercial Off-The-Shelf components. The proposed approach exploits the great flexibility offered by these components in reducing costs and optimizing performance. The method has been applied in detail to the design of the front-end receiver, which has a low noise figure (1.5 dB) and DC power consumption (less than 2 W). Such performance is particularly attractive since it allows fulfilling the stringent energy budget constraints that are typical of small LEO platforms.
Keywords: COTS, LEO, small-satellite, TT&C
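A receiver noise figure such as the 1.5 dB quoted above is normally budgeted stage by stage with the Friis cascade formula, which is easy to script when comparing COTS line-ups. The sketch below shows the calculation for a hypothetical filter-LNA-mixer chain whose placeholder values happen to land near 1.5 dB; it is not the authors' bill of materials.

```python
def cascade_noise_figure_db(stages):
    """Friis formula for a chain of stages given as (noise_figure_dB, gain_dB) tuples, in signal order."""
    from math import log10
    f_total, gain_so_far = 1.0, 1.0
    for i, (nf_db, g_db) in enumerate(stages):
        f = 10 ** (nf_db / 10.0)
        f_total = f if i == 0 else f_total + (f - 1.0) / gain_so_far
        gain_so_far *= 10 ** (g_db / 10.0)
    return 10.0 * log10(f_total)

# Hypothetical COTS front end: band-select filter (0.5 dB loss), LNA (0.8 dB NF, 20 dB gain),
# downconversion mixer (8 dB NF, -7 dB conversion gain)
print(cascade_noise_figure_db([(0.5, -0.5), (0.8, 20.0), (8.0, -7.0)]))   # roughly 1.5 dB
```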
Procedia PDF Downloads 131
1736 Task Based Functional Connectivity within Reward Network in Food Image Viewing Paradigm Using Functional MRI
Authors: Preetham Shankapal, Jill King, Kori Murray, Corby Martin, Paula Giselman, Jason Hicks, Owen Carmicheal
Abstract:
Activation of reward and satiety networks in the brain while processing palatable food cues, as well as functional connectivity during rest has been studied using functional Magnetic Resonance Imaging of the brain in various obesity phenotypes. However, functional connectivity within the reward and satiety network during food cue processing is understudied. 14 obese individuals underwent two fMRI scans during viewing of Macronutrient Picture System images. Each scan included two blocks of images of High Sugar/High Fat (HSHF), High Carbohydrate/High Fat (HCHF), Low Sugar/Low Fat (LSLF) and also non-food images. Seed voxels within seven food reward relevant ROIs: Insula, putamen and cingulate, precentral, parahippocampal, medial frontal and superior temporal gyri were isolated based on a prior meta-analysis. Beta series correlation for task-related functional connectivity between these seed voxels and the rest of the brain was computed. Voxel-level differences in functional connectivity were calculated between: first and the second scan; individuals who saw novel (N=7) vs. Repeated (N=7) images in the second scan; and between the HC/HF, HSHF blocks vs LSLF and non-food blocks. Computations and analysis showed that during food image viewing, reward network ROIs showed significant functional connectivity with each other and with other regions responsible for attentional and motor control, including inferior parietal lobe and precentral gyrus. These functional connectivity values were heightened among individuals who viewed novel HS/HF images in the second scan. In the second scan session, functional connectivity was reduced within the reward network but increased within attention, memory and recognition regions, suggesting habituation to reward properties and increased recollection of previously viewed images. In conclusion it can be inferred that Functional Connectivity within reward network and between reward and other brain regions, varies by important experimental conditions during food photography viewing, including habituation to shown foods.Keywords: fMRI, functional connectivity, task-based, beta series correlation
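The beta series correlation used above reduces, for each seed, to a Pearson correlation between the seed's trial-wise beta estimates and those of every other voxel, usually followed by a Fisher z-transform before group statistics. A minimal sketch is given below; the array shapes and random data are placeholders, not the study's GLM outputs.

```python
import numpy as np

def beta_series_correlation(seed_betas, voxel_betas):
    """Pearson correlation between a seed beta series (n_trials,) and each voxel (n_trials, n_voxels)."""
    seed = (seed_betas - seed_betas.mean()) / seed_betas.std()
    vox = (voxel_betas - voxel_betas.mean(axis=0)) / voxel_betas.std(axis=0)
    r = vox.T @ seed / len(seed)          # correlation per voxel
    return r, np.arctanh(r)               # raw r and Fisher z-transformed values for group statistics

# Placeholder data: 40 trial-wise betas for one seed ROI and 500 voxels
rng = np.random.default_rng(0)
betas = rng.standard_normal((40, 500))
seed = betas[:, 0] + 0.5 * rng.standard_normal(40)     # a seed series correlated with voxel 0
r, z = beta_series_correlation(seed, betas)
print(r[0], r[1:].mean())                               # strong correlation with voxel 0, near zero elsewhere
```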
Procedia PDF Downloads 273