Search results for: Three Dimensional Modified Quadratic Congruence/Modified Prime (3-D MQC/MP) code
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6236

806 Machine Learning for Exoplanetary Habitability Assessment

Authors: King Kumire, Amos Kubeka

Abstract:

The synergy of machine learning and advances in astronomical technology is giving rise to the new space age, which is marked by better habitability assessments. To initiate this discussion, it should be recorded for definition purposes that the symbiotic relationship between astronomy and improved computing has been code-named the Cis-Astro gateway concept. The phrase is unashamedly borrowed from the cis-lunar gateway template and its associated Lagrange points, which act as an orbital bridge between Earth and the Moon. For this study, however, the scientific audience is invited to bridge toward the discovery of new habitable planets. Cosmic probes of this magnitude can be utilized as the starting nodes of the astrobiological search for galactic life. This research can also act as a navigation system for future space telescope launches by delimiting target exoplanets, and the findings and associated platforms can be harnessed as building blocks for modeling climate change on Earth. If the human genus exhausts the resources of planet Earth, or some catastrophe renders Earth uninhabitable, an alternative planet to inhabit will be needed. Through interdisciplinary discussions of the International Astronautical Federation, the scientific community has so far reached the common position that engineers can reduce space mission costs by constructing a stable cis-lunar orbit infrastructure for refueling and other associated in-orbit servicing activities. Similarly, the Cis-Astro gateway can be envisaged as a budget optimization technique that models extra-solar bodies and can facilitate the scoping of future mission rendezvous. It should be registered as well that this broad and voluminous catalog of exoplanets will be narrowed along the way using machine learning filters.
The gist of this topic is the indirect economic rationale for establishing a habitability-scoping platform.

Keywords: machine-learning, habitability, exoplanets, supercomputing

Procedia PDF Downloads 74
804 Time-Domain Analysis Approaches of Soil-Structure Interaction: A Comparative Study

Authors: Abdelrahman Taha, Niloofar Malekghaini, Hamed Ebrahimian, Ramin Motamed

Abstract:

This paper compares the substructure and direct methods for soil-structure interaction (SSI) analysis in the time domain. In the substructure SSI method, the soil domain is replaced by a set of springs and dashpots, also referred to as the impedance function, derived through the study of the behavior of a massless rigid foundation. The impedance function is inherently frequency dependent, i.e., it varies as a function of the frequency content of the structural response. To use the frequency-dependent impedance function for time-domain SSI analysis, the impedance function is approximated at the fundamental frequency of the structure-soil system. To explore the potential limitations of the substructure modeling process, a two-dimensional reinforced concrete frame structure is modeled using substructure and direct methods in this study. The results show discrepancies between the simulated responses of the substructure and the direct approaches. To isolate the effects of higher modal responses, the same study is repeated using a harmonic input motion, in which a similar discrepancy is still observed between the substructure and direct approaches. It is concluded that the main source of discrepancy between the substructure and direct SSI approaches is likely attributed to the way the impedance functions are calculated, i.e., assuming a massless rigid foundation without considering the presence of the superstructure. Hence, a refined impedance function, considering the presence of the superstructure, shall be developed. This refined impedance function is expected to significantly improve the simulation accuracy of the substructure approach for structural systems whose behavior is dominated by the fundamental mode response.
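The approximation step described above, collapsing a frequency-dependent impedance function into constant spring and dashpot coefficients evaluated at the fundamental frequency, can be sketched as follows. The impedance function and all numbers below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def equivalent_spring_dashpot(K_func, omega_1):
    """Collapse a frequency-dependent impedance K(omega) into a constant
    spring stiffness k and dashpot coefficient c evaluated at the
    fundamental frequency omega_1 (rad/s), as in the substructure approach."""
    K = K_func(omega_1)        # complex-valued impedance at omega_1
    k = K.real                 # equivalent spring stiffness
    c = K.imag / omega_1       # equivalent dashpot coefficient
    return k, c

# Hypothetical impedance with mild frequency dependence (made-up numbers):
# real part softens with frequency, imaginary part grows linearly with omega.
K_func = lambda w: 5.0e8 * (1 - 0.1 * (w / 50.0) ** 2) + 1j * w * 2.0e6

# Evaluate at an assumed fundamental frequency of 12 rad/s
k, c = equivalent_spring_dashpot(K_func, omega_1=12.0)
```

Because k and c are frozen at one frequency, contributions from higher modes see the "wrong" impedance, which is consistent with the discrepancy the abstract attributes to higher modal responses.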

Keywords: direct approach, impedance function, soil-structure interaction, substructure approach

Procedia PDF Downloads 102
803 A Case Study on the Seismic Performance Assessment of the High-Rise Setback Tower Under Multiple Support Excitations on the Basis of TBI Guidelines

Authors: Kamyar Kildashti, Rasoul Mirghaderi

Abstract:

This paper describes the three-dimensional seismic performance assessment of a high-rise steel moment-frame setback tower, designed and detailed per the 2010 ASCE 7, under multiple support excitations. The vulnerability analyses are conducted based on nonlinear time history analyses under a set of multi-directional strong ground motion records scaled to the design-based site-specific spectrum in accordance with ASCE 41-13. The spatial variation of input motions between the far distant supports of each part of the tower is considered by defining a time lag. Monotonic and cyclic plastic hinge behavior for prequalified steel connections, panel zones, and steel columns is obtained from predefined values presented in the TBI Guidelines, PEER/ATC 72, and FEMA P440A to include stiffness and strength degradation. Inter-story drift ratios, residual drift ratios, and plastic hinge rotation demands under multiple support excitations are compared to those obtained under uniform support excitations. Performance objectives based on the acceptance criteria declared by the TBI Guidelines are compared between uniform and multiple support excitations. The results demonstrate that input motion discrepancy has detrimental effects on the local and global response of the tower.
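The time-lag treatment of spatially varying support motions amounts to applying a wave-passage delay to each support's input record. A minimal sketch; the record, support spacing, and apparent wave speed below are hypothetical, not the study's values:

```python
import numpy as np

def delayed_record(accel, dt, distance, wave_speed):
    """Apply the wave-passage delay distance / wave_speed to a ground-motion
    record sampled at time step dt (s), zero-padding the front and keeping
    the original record length."""
    accel = np.asarray(accel, float)
    lag_steps = int(round(distance / wave_speed / dt))
    return np.concatenate([np.zeros(lag_steps), accel])[: accel.size]

# Hypothetical record: support 20 m away, 1000 m/s apparent wave speed,
# 0.01 s sampling -> a 2-step lag relative to the reference support.
shifted = delayed_record([1.0, 2.0, 3.0, 4.0], dt=0.01,
                         distance=20.0, wave_speed=1000.0)
```

Each support of the setback tower would receive its own shifted copy of the record, so distant supports are excited out of phase, which is the input-motion discrepancy the abstract examines.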

Keywords: high-rise building, nonlinear time history analysis, multiple support excitation, performance-based design

Procedia PDF Downloads 269
802 Use of Two-Dimensional Hydraulics Modeling for Design of Erosion Remedy

Authors: Ayoub El Bourtali, Abdessamed Najine, Amrou Moussa Benmoussa

Abstract:

One of the main goals of river engineering is river training, which is defined as controlling and predicting the behavior of a river. It involves taking effective measures to eliminate related risks and thus improve the river system. In some rivers, the riverbed continues to erode and degrade, so equilibrium will never be reached. River geometric characteristics and riverbed erosion analysis are among the most complex but critical topics in river engineering and sediment hydraulics; riverbank erosion is a major response process in hydrodynamics, with a large impact on the ecological chain and on socio-economic processes. This study aims to apply computer technology that can analyze erosion and hydraulic problems through simulation and modeling; choosing the right model remains a difficult and sensitive job for field engineers. This paper makes use of version 5.0.4 of the HEC-RAS model. The river section is adopted according to the gauged station and the proximity of the adjustment. In this work, we demonstrate how 2D hydraulic modeling helped clarify the design and provide visuals of depths and velocities at the riverbanks and throughout advanced structures. The Hydrologic Engineering Center's River Analysis System (HEC-RAS) 2D model was used to create a hydraulic study of the erosion model. The geometric data were generated from a 12.5-meter x 12.5-meter resolution digital elevation model. In addition to showing eroded or overturned river sections, the model output also shows patterns of riverbank change, which can help reduce problems caused by erosion.

Keywords: 2D hydraulics model, erosion, floodplain, hydrodynamic, HEC-RAS, riverbed erosion, river morphology, resolution digital data, sediment

Procedia PDF Downloads 174
801 Virtual Reality as a Method in Transformative Learning: A Strategy to Reduce Implicit Bias

Authors: Cory A. Logston

Abstract:

It is imperative that researchers continue to explore every transformative strategy to increase empathy and awareness of racial bias. Racism is a social and political concept that uses stereotypical ideology to highlight racial inequities. Everyone has biases toward disparate out-groups that they may not be aware of. There is some form of racism in every profession; doctors, lawyers, and teachers are not immune. There have been numerous successful and unsuccessful strategies to motivate and transform an individual's unconscious biased attitudes. One method designed to induce a transformative experience and identify implicit bias is virtual reality (VR), a technology designed to transport the user to a three-dimensional environment. In a virtual reality simulation, the viewer is immersed in a realistic interactive video, taking on the perspective of a Black man. The viewer, as the character, experiences discrimination in various life circumstances from childhood into adulthood: prejudice felt in school, encounters with the police as an adolescent, and false accusations in the workplace. Current research suggests that an immersive VR simulation can enhance self-awareness and become a transformative learning experience. This study uses virtual reality immersion and transformative learning theory to create empathy and identify unintentional racial bias. Participants, White teachers, will experience a VR immersion to create awareness and identify implicit biases regarding Black students. The desired outcome is a springboard for them to reconceptualize their own implicit biases. Virtual reality is gaining traction in the research world and promises to be an effective tool in the transformative learning process.

Keywords: empathy, implicit bias, transformative learning, virtual reality

Procedia PDF Downloads 178
800 Construction of Submerged Aquatic Vegetation Index through Global Sensitivity Analysis of Radiative Transfer Model

Authors: Guanhua Zhou, Zhongqi Ma

Abstract:

Submerged aquatic vegetation (SAV) in wetlands can absorb nitrogen and phosphorus effectively to prevent the eutrophication of water. It is feasible to monitor the distribution of SAV through remote sensing, but because the vegetation signal is weakened by the water body, traditional terrestrial vegetation indices are not applicable. This paper aims at constructing an SAV index that enhances the vegetation signal and distinguishes SAV from the water body. The methodology is as follows: (1) select the bands sensitive to the vegetation parameters based on a global sensitivity analysis of an SAV canopy radiative transfer model; (2) taking the soil line concept as a reference, analyze the distribution of SAV and water reflectance, simulated by the SAV canopy model and a semi-analytical water model, in the two-dimensional spaces built from different sensitive bands; (3) select the band combinations with better separation between SAV and water, and use them to build SAV indices in the form of the normalized difference vegetation index (NDVI); (4) analyze the sensitivity of the indices to the water and vegetation parameters, and choose the one more sensitive to the vegetation parameters. It is shown that the index formed from the bands with central wavelengths at 705 nm and 842 nm has high sensitivity to leaf chlorophyll content while being little affected by water constituents. The model simulation shows a weak negative correlation of the SAV index with increasing water depth. Moreover, the index is better at separating SAV from water than NDVI. The SAV index is expected to have potential in parameter inversion for wetland remote sensing.
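The resulting index has the NDVI form built on the 705 nm and 842 nm bands identified above. A minimal sketch; the reflectance values are illustrative, not from the paper's simulations:

```python
import numpy as np

def sav_index(r705, r842):
    """Normalized-difference index in the NDVI form, using the two bands
    the abstract identifies as most sensitive (705 nm and 842 nm)."""
    r705 = np.asarray(r705, float)
    r842 = np.asarray(r842, float)
    return (r842 - r705) / (r842 + r705)

# Illustrative reflectances: an SAV pixel retains some near-infrared
# signal, while open water absorbs strongly in the near-infrared.
sav_pixel = sav_index(0.04, 0.12)    # positive: vegetation signal present
water_pixel = sav_index(0.05, 0.02)  # negative: water dominates
```

The sign separation between the two pixels is what makes the index usable as an SAV/water discriminator.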

Keywords: global sensitivity analysis, radiative transfer model, submerged aquatic vegetation, vegetation indices

Procedia PDF Downloads 239
799 Perceptions of Teachers toward Inclusive Education Focus on Hearing Impairment

Authors: Chalise Kiran

Abstract:

The prime idea of inclusive education is to mainstream every child in education. However, implementation becomes challenging when there are gaps between policy and practice, and even more so when children have disabilities. The focus is generally on the policy gap, but the problem does not always lie with policy; proper practice can be the challenge in countries like Nepal. In determining practice, teachers' perceptions toward inclusion play a vital role. Nepal has categorized disability into seven types (physical, visual, hearing, vision/hearing, speech, mental, and multiple); of these, hearing impairment is the realm of this study. Given the limited research on children with disabilities, and the rarity of research on children with hearing impairment (CWHI) and their education in Nepal, this study is a pioneering effort to identify the problems and challenges of CWHI-focused inclusive education in schools, including the gaps and barriers to its proper implementation. Philosophically, the paradigm of the study is post-positivism. Within the post-positivist worldview, a quantitative approach is used to describe the situation and reveal inferential relationships, in line with a natural model of objective reality. The data were collected through an individual survey of the teachers and head teachers of 35 schools in Nepal. The survey questionnaire was completed by respondents from schools attended by CWHI across 20 districts in Nepal's seven provinces. Through these considerations, perceptions of CWHI-focused inclusive education were explored. The data were analyzed using both descriptive and inferential tools: a Likert-scale-based analysis for the descriptive part, and the chi-square test to identify significant relationships between the dependent and independent variables.
The descriptive analysis showed that the majority of teachers hold positive perceptions toward CWHI-focused inclusive education and toward its implementation, though there are problems and challenges. The study identified the major challenges and problems by category. Some of them are: a large number of students in a single class; only generic textbooks available for CWHI, and textbooks not available to all students; little opportunity for teachers to acquire knowledge on CWHI; too few teachers in the schools; no flexibility in the curriculum; weak information systems in schools; no educational counselor available; disaster-prone students; no child abuse control strategy; no disabled-friendly schools; no free health check-up facility; and no participation of the students in school activities and child clubs. By and large, it was found that a teacher's age, gender, years of experience, position, employment status, and own disability show no statistically significant relation to perceptions of CWHI-focused inclusive education or to its successful implementation. However, in some cases the null hypothesis was rejected, while in others it was fully retained. The study suggests policy implications, implications for educational authorities, and implications for teachers and parents, by category.
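The inferential step described above, a chi-square test of independence between a teacher attribute and perception, can be sketched as follows. The 2x2 contingency table is hypothetical, not data from the study:

```python
import numpy as np

def chi_square_independence(table):
    """Pearson chi-square test of independence on a contingency table.
    Returns the test statistic and degrees of freedom; the statistic is
    compared against a critical value (e.g. 3.841 for df=1, alpha=0.05)."""
    table = np.asarray(table, float)
    row = table.sum(axis=1, keepdims=True)            # row totals
    col = table.sum(axis=0, keepdims=True)            # column totals
    expected = row @ col / table.sum()                # expected counts
    stat = ((table - expected) ** 2 / expected).sum() # Pearson statistic
    df = (table.shape[0] - 1) * (table.shape[1] - 1)
    return stat, df

# Hypothetical table: teacher gender (rows) vs. positive/negative
# perception (columns).
stat, df = chi_square_independence([[18, 7], [15, 10]])
retain_null = stat < 3.841  # fail to reject at alpha = 0.05 for df = 1
```

A statistic below the critical value retains the null hypothesis of no association, matching the abstract's finding for most teacher attributes.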

Keywords: children with hearing impairment, disability, inclusive education, perception

Procedia PDF Downloads 97
798 Orthogonal Metal Cutting Simulation of Steel AISI 1045 via Smoothed Particle Hydrodynamic Method

Authors: Seyed Hamed Hashemi Sohi, Gerald Jo Denoga

Abstract:

Machining or metal cutting is one of the most widely used production processes in industry. The quality of the process and the resulting machined product depends on parameters like tool geometry, material, and cutting conditions. However, the relationships of these parameters to the cutting process are often based mostly on empirical knowledge. In this study, computer modeling and simulation using LS-DYNA software and a Smoothed Particle Hydrodynamic (SPH) methodology was performed on the orthogonal metal cutting process to analyze the three-dimensional deformation of AISI 1045 medium carbon steel during machining. The simulation was performed using three constitutive models: the Power Law model, the Johnson-Cook model, and the Zerilli-Armstrong (Z-A) model. The outcomes were compared against the simulated results obtained by Cenk Kiliçaslan using the Finite Element Method (FEM) and against the empirical results of Jaspers and Filice. The analysis shows that the SPH method combined with the Zerilli-Armstrong constitutive model is a viable alternative for simulating the metal cutting process: the tangential force was overestimated by 7%, and the normal force was underestimated by 16%, compared with empirical values. The simulated flow stress versus strain at various temperatures was also validated against empirical values. The SPH method using the Z-A model has also proven robust against time-scaling issues. Experimental work was also done to investigate the effects of friction, rake angle, and tool tip radius on the simulation.
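For orientation, the Johnson-Cook model mentioned above gives the flow stress as sigma = (A + B*eps^n) * (1 + C*ln(eps_dot/eps_dot_0)) * (1 - T*^m), with T* the homologous temperature. A minimal sketch using parameter values commonly quoted for AISI 1045 (after Jaspers); the paper's own constants are not given here, so treat these numbers as illustrative defaults:

```python
import math

def johnson_cook_stress(strain, strain_rate, T,
                        A=553.1, B=600.8, n=0.234, C=0.0134, m=1.0,
                        eps_dot_0=1.0, T_room=20.0, T_melt=1460.0):
    """Johnson-Cook flow stress in MPa.

    strain      -- equivalent plastic strain (dimensionless)
    strain_rate -- equivalent plastic strain rate (1/s)
    T           -- temperature in degrees C
    The three bracketed factors capture strain hardening, strain-rate
    hardening, and thermal softening, respectively.
    """
    T_star = (T - T_room) / (T_melt - T_room)  # homologous temperature
    return ((A + B * strain ** n)
            * (1.0 + C * math.log(strain_rate / eps_dot_0))
            * (1.0 - T_star ** m))

# At room temperature and the reference strain rate, only the
# strain-hardening bracket is active.
sigma_cold = johnson_cook_stress(0.2, 1.0, 20.0)
sigma_hot = johnson_cook_stress(0.2, 1.0, 700.0)  # thermal softening
```

Plotting such curves against measured flow stress at various temperatures is essentially the validation step the abstract describes.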

Keywords: metal cutting, smoothed particle hydrodynamics, constitutive models, experimental, cutting forces analyses

Procedia PDF Downloads 244
797 Predicting Radioactive Waste Glass Viscosity, Density and Dissolution with Machine Learning

Authors: Joseph Lillington, Tom Gout, Mike Harrison, Ian Farnan

Abstract:

The vitrification of high-level nuclear waste within borosilicate glass and its incorporation within a multi-barrier repository deep underground is widely accepted as the preferred disposal method. However, for this to happen, any safety case will require validation that the initially localized radionuclides will not be released in significant quantities into the near/far-field. Therefore, accurate mechanistic models are necessary to predict glass dissolution, and these should be robust to a variety of incorporated waste species and leaching test conditions, particularly given substantial variations across international waste streams. Here, machine learning is used to predict glass material properties (viscosity, density) and glass leaching model parameters from large-scale industrial data. A variety of different machine learning algorithms have been compared to assess performance. Density was predicted solely from composition, whereas viscosity additionally considered temperature. To predict suitable glass leaching model parameters, a large simulated dataset was created by coupling MATLAB and the chemical reactive-transport code HYTEC, considering the state-of-the-art GRAAL model (glass reactivity in allowance of the alteration layer). The trained models were then applied to the large-scale industrial, experimental data to identify potentially appropriate model parameters. Results indicate that ensemble methods can accurately predict viscosity as a function of temperature and composition across all three industrial datasets. Glass density prediction shows reliable learning performance, with predictions primarily falling within the experimental uncertainty of the test data. Furthermore, machine learning can predict the behavior of glass dissolution model parameters, demonstrating potential value in GRAAL model development and in assessing suitable model parameters for large-scale industrial glass dissolution data.
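As a rough illustration of the ensemble approach described (not the authors' pipeline or data), an ensemble regressor can be fit to composition-plus-temperature features. The synthetic Arrhenius-style viscosity data below is an assumption for demonstration only:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for industrial data: two oxide fractions plus
# temperature as features (hypothetical ranges).
n = 400
sio2 = rng.uniform(0.4, 0.6, n)       # SiO2 mass fraction
b2o3 = rng.uniform(0.1, 0.3, n)       # B2O3 mass fraction
temp = rng.uniform(1100.0, 1400.0, n) # temperature, K

# Toy Arrhenius-like log-viscosity with a composition-dependent
# activation term (made-up coefficients).
log_eta = -3.0 + (8000.0 + 4000.0 * sio2 - 2000.0 * b2o3) / temp

X = np.column_stack([sio2, b2o3, temp])
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, log_eta)
r2 = model.score(X, log_eta)  # training-set R^2 on the toy data
```

Predicting log-viscosity rather than viscosity keeps the target well-scaled across the strongly nonlinear temperature dependence, which is one reason ensemble methods handle this kind of data well.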

Keywords: machine learning, predictive modelling, pattern recognition, radioactive waste glass

Procedia PDF Downloads 99
796 Strategic Policy Formulation to Ensure the Atlantic Forest Regeneration

Authors: Ramon F. B. da Silva, Mateus Batistella, Emilio Moran

Abstract:

Although there are two Forest Transition (FT) pathways, economic development and forest scarcity, many contexts shape the model of FT observed in each particular region. This means that local conditions, such as relief, soil quality, historic land use/cover, public policies, the engagement of society in compliance with legal regulations, and the action of enforcement agencies, are dimensions which, combined, create contexts that enable forest regeneration. From this perspective, we can understand the regeneration of native vegetation cover in the Paraíba Valley (Atlantic Forest biome), ongoing since the 1960s. This research analyzed public information, land use/cover maps, and environmental public policies, and interviewed 17 stakeholders from federal and state agencies, municipal environmental and agricultural departments, civil society, and farmers, aiming to comprehend the contexts behind forest regeneration in the Paraíba Valley, Sao Paulo State, Brazil. The first policy to protect forest vegetation was the Forest Code no. 4771 of 1965, but this legislation did not promote the increase of forest cover, only the control of deforestation; this was not enough for the Atlantic Forest biome, which reached its peak of degradation in 1985 (8% of Atlantic Forest remnants). We concluded that Brazilian environmental legislation acted strategically to promote the increase of forest cover (102% regeneration between 1985 and 2011) from 1993, when Federal Decree no. 750 declared the initial and advanced stages of secondary succession protected against any kind of exploitation or degradation, ensuring the forest regeneration process. Strategic policy formulation was also observed in Sao Paulo State law no. 6171 of 1988, which prohibited the use of fire to manage the agricultural landscape, triggering forest regeneration in former pasture areas.

Keywords: forest transition, land abandonment, law enforcement, rural economic crisis

Procedia PDF Downloads 532
795 Polymer Nanostructures Based Catalytic Materials for Energy and Environmental Applications

Authors: S. Ghosh, L. Ramos, A. N. Kouamé, A.-L. Teillout, H. Remita

Abstract:

Catalytic materials have attracted continuous attention due to their promising roles in a variety of energy and environmental applications, including clean energy, energy conversion and storage, purification and separation, degradation of pollutants, and electrochemical reactions. With advanced synthetic technologies, polymer nanostructures and nanocomposites can be synthesized directly through a soft-template-mediated approach using swollen hexagonal mesophases, which allows the size, morphology, and structure of the polymer nanostructures to be modulated. As an alternative to conventional catalytic materials, one-dimensional poly(diphenylbutadiyne) (PDPB) polymer nanostructures show high photocatalytic activity under visible light for the degradation of pollutants. These photocatalysts are very stable with cycling: transmission electron microscopy (TEM) and AFM-IR characterizations reveal that the morphology and structure of the polymer nanostructures do not change after photocatalysis. These stable and cheap polymer nanofibers and metal-polymer nanocomposites are easy to process and can be reused without appreciable loss of activity. The nanocomposites are formed via a one-pot chemical redox reaction that deposits 3.4 nm Pd nanoparticles on PDPB nanofibers (30 nm); the reduction of Pd(II) ions is accompanied by oxidative polymerization, leading to the composite materials. The hybrid Pd/PDPB nanocomposites were used as electrode materials for the electrocatalytic oxidation of ethanol without the support of a proton-exchange Nafion membrane. Hence, these conducting polymer nanofibers and nanocomposites offer the prospect of a new generation of efficient photocatalysts for environmental protection and of electrocatalysts for fuel cell applications.

Keywords: conducting polymer, swollen hexagonal mesophases, solar photocatalysis, electrocatalysis, water depollution

Procedia PDF Downloads 367
794 Migrant Labour in Kerala: A Study on Inter-State Migrant Workers

Authors: Arun Perumbilavil Anand

Abstract:

In recent years, Kerala has been witnessing a large inflow of migrants from different parts of the country. Though the migrants were initially largely from the districts of Tamil Nadu and mostly seasonal in nature, at a later period the state started receiving migrants from far-off states like UP, Assam, and Bengal. Higher wages for unskilled labour, large employment opportunities, the reluctance of Kerala workers to do menial and hard physical work, and a shortage of local labour, paradoxical given the high unemployment rate in the state, led to the massive influx of migrant labourers. This study takes a multi-dimensional overview of migrant labour in Kerala, encompassing factors such as channels of migration, the nature of employment contracts entered into, and the corresponding wages and benefits obtained. The study also analyses the circumstances that led to the large influx of migrants from different states of India, and further examines the varying dimensions of the living and working environment as well as the health conditions of migrants. The study is based on empirical findings obtained from primary interviews conducted with migrants in the districts of Palakkad, Malappuram, and Ernakulam. It concludes by noting that Kerala will inevitably have to depend on migrant labour and is likely to experience heavy in-migration of labour in the future, provided the existing socioeconomic and demographic situations persist. Since this is inevitable, the best way forward for the state is to prepare well in advance to receive and accommodate such migrant labour so that migrants can lead comfortable lives in a hassle-free environment, which would play a vital role in further strengthening and sustaining the growth trajectory not only of Kerala's economy but also of the states of origin.

Keywords: Kerala, labour, migration, migrant workers

Procedia PDF Downloads 232
793 Full-Face Hyaluronic Acid Implants Assisted by Artificial Intelligence-Generated Post-treatment 3D Models

Authors: Ciro Cursio, Pio Luigi Cursio, Giulia Cursio, Isabella Chiardi, Luigi Cursio

Abstract:

Introduction: Full-face aesthetic treatments often present a difficult task: since different patients possess different anatomical and tissue characteristics, there is no guarantee that the same treatment will have the same effect on multiple patients. Additionally, full-face rejuvenation and beautification treatments require not only a high degree of technical skill but also the ability to choose the right product for each area and a keen artistic eye. Method: We present an artificial-intelligence-based algorithm that can generate realistic post-treatment 3D models from the patient's requests together with the doctor's input. These three-dimensional predictions can be used by the practitioner for two purposes: firstly, they help ensure that the patient and the doctor are completely aligned on the expectations for the treatment; secondly, the doctor can use them as a visual guide, obtaining a natural result that would normally stem from the practitioner's artistic skill. To this end, the algorithm is able to predict the injection zones, the type and quantity of hyaluronic acid, the injection depth, and the technique to use. Results: Our innovation consists in providing an objective visual representation of the patient that aids the patient-doctor dialogue. Based on this information, the patient can express the desire to undergo a specific treatment or make changes to the therapeutic plan; in short, the patient becomes an active agent in the choices made before the treatment. Conclusion: We believe this algorithm will prove a useful tool in the pre-treatment decision-making process, preventing both the patient and the doctor from making a leap into the dark.

Keywords: hyaluronic acid, fillers, full face, artificial intelligence, 3D

Procedia PDF Downloads 65
792 Numerical Study on the Effects of Truncated Ribs on Film Cooling with Ribbed Cross-Flow Coolant Channel

Authors: Qijiao He, Lin Ye

Abstract:

To evaluate the effect of ribs on the internal flow structure in the film hole and on the film cooling performance of the outer surface, this numerical study investigates the effects of rib configuration on film cooling performance with a ribbed cross-flow coolant channel. The baseline smooth case and three ribbed cases, including a continuous rib case and two cross-truncated rib cases with different arrangements, are studied. The distributions of adiabatic film cooling effectiveness and heat transfer coefficient are obtained at blowing ratios of 0.5 and 1.0. A commercial steady RANS (Reynolds-averaged Navier-Stokes) code with the realizable k-ε turbulence model and enhanced wall treatment was used for the numerical simulations, and the numerical model is validated against available experimental data. At the lower blowing ratio, the two cross-truncated rib cases produce approximately the same cooling effectiveness as the smooth case, while the continuous rib case significantly outperforms the other cases. With the increase of blowing ratio, the ribbed cases become inferior to the smooth case, especially in the upstream region, with the cross-truncated rib I case producing the highest cooling effectiveness among the ribbed cases. It is found that film cooling effectiveness deteriorates as the spiral intensity of the cross-flow inside the film hole increases: lower spiral intensity leads to better film coverage and thus better cooling effectiveness. The distinct relative merits among the cases at different blowing ratios are explained by this dominant mechanism. With regard to the heat transfer coefficient, the smooth case shows higher heat transfer intensity than the ribbed cases at the studied blowing ratios, and the laterally-averaged heat transfer coefficient of the cross-truncated rib I case is higher than that of the cross-truncated rib II case.
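The two outer-surface metrics reported above, adiabatic film cooling effectiveness eta = (T_inf - T_aw) / (T_inf - T_c) and its laterally-averaged profile, can be computed as in this minimal sketch; the temperature and field values are illustrative, not simulation data:

```python
import numpy as np

def film_cooling_effectiveness(T_inf, T_aw, T_c):
    """Adiabatic film cooling effectiveness:
    eta = (T_inf - T_aw) / (T_inf - T_c), where T_inf is the mainstream
    temperature, T_aw the adiabatic wall temperature, and T_c the coolant
    temperature. eta = 1 means the wall sees pure coolant; 0 means none."""
    return (T_inf - T_aw) / (T_inf - T_c)

def laterally_averaged(field):
    """Average a 2-D field (rows = streamwise stations, columns =
    spanwise positions) over the lateral (spanwise) axis."""
    return np.mean(np.asarray(field, float), axis=1)

# Illustrative point value: 400 K mainstream, 370 K adiabatic wall,
# 300 K coolant.
eta_point = film_cooling_effectiveness(400.0, 370.0, 300.0)

# Illustrative 2x2 effectiveness field reduced to a streamwise profile.
eta_lat = laterally_averaged([[0.2, 0.4], [0.1, 0.3]])
```

Laterally-averaged profiles like `eta_lat` are what allow the smooth, continuous-rib, and cross-truncated-rib cases to be compared station by station along the streamwise direction.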

Keywords: cross-flow, cross-truncated rib, film cooling, numerical simulation

Procedia PDF Downloads 123
791 Polycode Texts in Communication of Antisocial Groups: Functional and Pragmatic Aspects

Authors: Ivan Potapov

Abstract:

Background: The aim of this paper is to investigate polycode texts in the communication of youth antisocial groups. Nowadays, the notion of a text has numerous interpretations; besides all other approaches to defining a text, the semiotic and cultural-semiotic ones must be taken into account. Rapidly developing information technology, globalization, and new ways of encoding information increase the role of the cultural-semiotic approach. The development of computer technologies, however, also changes the text itself: polycode texts play an ever more important role in the everyday communication of the younger generation. Research into the functional and pragmatic aspects of both their verbal and non-verbal content is therefore quite important. Methods and Material: For this survey, we applied a combination of four methods of text investigation: intention and content analysis as well as semantic and syntactic analysis. These methods provided information on general text properties, the content of transmitted messages, and each communicant's intentions. During the research we also established the social background, which allowed us to distinguish intertextual connections between certain types of polycode texts. As sources of research material, we used 20 public channels in the popular messenger Telegram and data extracted from smartphones belonging to arrested members of antisocial groups. Findings: The investigation shows that polycode texts can be characterized as highly intertextual language units. Moreover, a classification of these texts based on the communicants' intentions can be outlined: the most common types of antisocial polycode texts are calls to illegal action and agitation. Each type has its own semantic core, depending on the sphere of communication, whereas the syntactic structure is universal for most polycode texts.
Conclusion: Polycode texts play an important role in online communication. The results of this investigation demonstrate that in some social groups the use of these texts has a destructive influence on the younger generation and clearly needs further research.

Keywords: text, polycode text, internet linguistics, text analysis, context, semiotics, sociolinguistics

Procedia PDF Downloads 114
790 A Comparative Study of the Effects of Vibratory Stress Relief and Thermal Aging on the Residual Stress of Explosives Materials

Authors: Xuemei Yang, Xin Sun, Cheng Fu, Qiong Lan, Chao Han

Abstract:

Residual stresses, which can be produced during the manufacturing of plastic bonded explosives (PBX), play an important role in weapon system security and reliability, and they can and do change in service. This paper studies the influence of vibratory stress relief (VSR) and thermal aging on the residual stress of explosives. First, the residual stress relaxation of PBX under different VSR conditions, such as vibration time, amplitude, and dynamic strain, was studied by the drill-hole technique. The results indicate that vibratory amplitude, time, and dynamic strain have a significant influence on the residual stress relief of PBX. The rate of residual stress relief first increases and then decreases with increasing dynamic strain, amplitude, and time, because at first the activation energy is too small to make the PBX yield plastically; once the dynamic strain, time, and amplitude exceed a certain threshold, the residual stress decreases sharply, and this sharp drop in the stress relief rate may be caused by over-vibration. A comparison between VSR and thermal aging was also made. The reduction in residual stress after VSR with suitable vibratory parameters is equivalent to about 73% of that achieved by 7 days of thermal aging. In addition, the density attenuation rate, mechanical properties, and dimensional stability measured 3 months after VSR were almost the same as after thermal aging. However, compared with traditional thermal aging, VSR takes only a very short time, which greatly improves the efficiency of aging treatment for explosive materials. Therefore, VSR could be a potential alternative technique for the industrial relaxation of residual stress in PBX explosives.

Keywords: explosives, residual stresses, thermal aging, vibratory stress relief, VSR

Procedia PDF Downloads 138
789 Shear Behavior of Reinforced Concrete Beams Cast with Recycled Coarse Aggregate

Authors: Salah A. Aly, Mohammed A. Ibrahim, Mostafa M. khttab

Abstract:

The amount of construction and demolition (C&D) waste has increased considerably over the last few decades. From the viewpoint of environmental preservation and effective utilization of resources, crushing C&D concrete waste to produce coarse aggregate (CA), at different replacement percentages, for the production of new concrete is one common means of achieving a more environment-friendly concrete. The study presented herein was conducted in two phases. In the first phase, the materials were selected, their physical, mechanical, and chemical characteristics were evaluated, and different concrete mixes were designed, with the recycled concrete aggregate (RCA) ratio as the investigated parameter. The mechanical properties of all mixes were evaluated based on compressive strength and workability results, and two mixes were accordingly chosen for the next phase. In the second phase, the structural behavior of the concrete beams was studied. Sixteen beams were cast to investigate the effects of the RCA ratio, the shear span-to-depth ratio, and the location and reinforcement of openings on the shear behavior of the tested specimens; all beams were designed to fail in shear. Compressive strength tests indicated that replacing up to 50% of the natural aggregate with recycled concrete aggregate in mixtures with 350 kg/m3 cement content increased the concrete compressive strength. Moreover, the tensile strength and modulus of elasticity of the specimens with RCA were very close to those with natural aggregates. The ultimate shear strength of beams with RCA was also very close to that of beams with natural aggregates, indicating the possibility of using RCA as a partial replacement in producing structural concrete elements.
The validity of both the Egyptian Code for the Design and Implementation of Concrete Structures (ECCS 203-2007) and the American Concrete Institute (ACI) 318-11 code for estimating the shear strength of the tested RCA beams was investigated. Both code procedures were found to give conservative estimates of shear strength.
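For context, the simplified ACI 318-11 expression for the concrete contribution to shear strength, against which such test results are typically checked (SI units, with the concrete cylinder strength $f'_c$ in MPa), is:

```latex
V_c = 0.17\,\lambda\,\sqrt{f'_c}\; b_w d
```

where $b_w$ is the web width, $d$ the effective depth, and $\lambda$ the lightweight-concrete modification factor ($\lambda = 1$ for normal-weight concrete).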

Keywords: construction and demolition (C&D) waste, coarse aggregate (CA), recycled coarse aggregates (RCA), opening

Procedia PDF Downloads 377
788 Optimization of Polymerase Chain Reaction Condition to Amplify Exon 9 of PIK3CA Gene in Preventing False Positive Detection Caused by Pseudogene Existence in Breast Cancer

Authors: Dina Athariah, Desriani Desriani, Bugi Ratno Budiarto, Abinawanto Abinawanto, Dwi Wulandari

Abstract:

Breast cancer is regulated by many genes. Defects in the PIK3CA gene, especially at the exon 9 hot spot positions (E542K and E545K), induce early transformation of breast cells. Early detection of breast cancer based on the mutation profile of this hot spot region can be hampered by the existence of a pseudogene, marked by a substitution at base 1658 (E545A) and a deletion at base 1659, as previously proven in several cancers. To the best of the authors' knowledge, no studies of this pseudogene phenomenon in breast cancer have been reported until recently. Here, we report PCR optimization to obtain the true exon 9 of the PIK3CA gene rather than its pseudogene, thereby increasing the validity of the data. Material and methods: Two genomic DNA samples, coded Dev and En, were used in this experiment. Two primer pairs, yielding PCR products of 200 bp and 400 bp, were designed for the standard PCR method, while another primer pair was designed for nested PCR followed by DNA sequencing. For the nested PCR, we optimized the annealing temperatures of the first and second rounds and the number of first-round PCR cycles (15 versus 25). Result: Standard PCR using both designed primer pairs failed to detect the true PIK3CA gene; a substitution at base 1658 and a deletion at base 1659 appearing in the sequence chromatogram of the PCR product indicated the pseudogene. In contrast, nested PCR under the optimum conditions (first-round annealing at 55°C with 15 cycles, second-round annealing at 60.7°C) detected the true PIK3CA gene. The Dev sample was identified as wild type, while the En sample contained one substitution at position 545 of exon 9, changing the amino acid from E to K. In conclusion, the pseudogene also exists in breast cancer, and the application of the optimized nested PCR used in this study can detect the true exon 9 of the PIK3CA gene.
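The optimized two-round program can be summarized as a simple data structure. A minimal sketch follows; the denaturation (95°C) and extension (72°C) temperatures and the second-round cycle count are standard assumptions not given in the abstract, whereas the annealing temperatures and the first-round cycle count come from the text above:

```python
# Sketch of the optimized two-round nested PCR program described above.
# Denaturation (95 C) and extension (72 C) steps are standard assumptions;
# the annealing temperatures and the 15-cycle first round come from the abstract.
def nested_pcr_program():
    return [
        {"round": 1, "cycles": 15,
         "steps": [("denature", 95.0), ("anneal", 55.0), ("extend", 72.0)]},
        {"round": 2, "cycles": 30,  # second-round cycle count: assumed typical
         "steps": [("denature", 95.0), ("anneal", 60.7), ("extend", 72.0)]},
    ]
```

The key point encoded here is the raised second-round annealing temperature (60.7°C), which is what discriminates the true gene from the pseudogene template.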

Keywords: breast cancer, exon 9, hotspot mutation, PIK3CA, pseudogene

Procedia PDF Downloads 227
787 Minimizing Vehicular Traffic via Integrated Land Use Development: A Heuristic Optimization Approach

Authors: Babu Veeregowda, Rongfang Liu

Abstract:

The current traffic impact assessment methodology and environmental quality review process for the approval of land development projects are conventional, stagnant, and one-dimensional. Environmental review policy and procedure fail to provide direction on regulating or seeking alternative land uses and sizes that exploit the existing or surrounding elements of the built environment (the '4 Ds' of development: density, diversity, design, and distance to transit) or smart growth principles, which influence travel behavior and have a significant effect in reducing vehicular traffic. Nor does environmental review policy give direction on how to incorporate urban planning into development, for example through non-motorized roadway elements such as sidewalks, bus shelters, and access to community facilities. This research developed a methodology to optimize the mix of land uses and sizes using a heuristic optimization process, so as to minimize auto-dependent development while meeting the interests of key stakeholders. A case study of the Willets Point Mixed Use Development in Queens, New York, was used to assess the benefits of the methodology; the approved Willets Point project was based on the maximum envelope of size and land use type allowed by current conventional urban renewal plans. This paper also evaluates parking accumulation for various land uses to explore the potential of shared parking to further optimize the mix of land uses and sizes. The research is timely and useful to the many stakeholders interested in understanding the benefits of integrated land use development.
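As an illustration of the heuristic idea (not the authors' actual model), a toy hill-climbing search over a land-use mix might look as follows; the trip rates, floor-area budget, and internal-capture discount are hypothetical values chosen only to make the sketch runnable:

```python
import random

# Hypothetical daily vehicle-trip rates per 1000 sq ft of floor area
# (illustrative only, not the ITE rates used in practice).
TRIP_RATES = {"residential": 6.7, "retail": 42.7, "office": 11.0, "hotel": 8.2}
BUDGET = 5000.0          # total floor-area budget, in 1000 sq ft
INTERNAL_CAPTURE = 0.10  # assumed trip discount per additional mixed use

def vehicle_trips(mix):
    """Total daily vehicle trips for a land-use mix, with a crude
    internal-capture discount that rewards mixing uses."""
    gross = sum(TRIP_RATES[u] * area for u, area in mix.items())
    n_uses = sum(1 for area in mix.values() if area > 0)
    return gross * (1.0 - INTERNAL_CAPTURE * max(0, n_uses - 1))

def hill_climb(steps=2000, seed=0):
    """Randomly shift floor area between uses, keeping moves that
    reduce total trips; total area stays equal to BUDGET."""
    rng = random.Random(seed)
    uses = list(TRIP_RATES)
    mix = {u: BUDGET / len(uses) for u in uses}  # start from an even split
    best = vehicle_trips(mix)
    for _ in range(steps):
        a, b = rng.sample(uses, 2)
        delta = rng.uniform(0.0, mix[a])  # shift some area from a to b
        trial = dict(mix)
        trial[a] -= delta
        trial[b] += delta
        t = vehicle_trips(trial)
        if t < best:
            mix, best = trial, t
    return mix, best
```

In a real application the objective would be a calibrated trip-generation model subject to zoning and stakeholder constraints, but the accept-if-better loop is the essence of the heuristic approach.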

Keywords: traffic impact, mixed use, optimization, trip generation

Procedia PDF Downloads 197
786 Microbial Degradation of Lignin for Production of Valuable Chemicals

Authors: Fnu Asina, Ivana Brzonova, Keith Voeller, Yun Ji, Alena Kubatova, Evguenii Kozliak

Abstract:

Lignin, a heterogeneous three-dimensional biopolymer, is one of the building blocks of lignocellulosic biomass. Due to its limited chemical reactivity, lignin is currently processed as a low-value by-product in pulp and paper mills. Among industrial lignins, Kraft lignin represents a major by-product of the pulping process widely employed across the pulp and paper industry; its valorization therefore holds great potential, as it would provide a readily available source of aromatic compounds for various industrial applications. Microbial degradation is well known for using highly specific ligninolytic enzymes secreted by microorganisms and for its mild operating conditions compared with conventional chemical approaches. In this study, the degradation of Indulin AT lignin was assessed by comparing the effects of basidiomycetous fungi (Coriolus versicolor and Trametes gallica) and actinobacteria (Mycobacterium sp. and Streptomyces sp.) with those of two commercial laccases, from T. versicolor (≥ 10 U/mg) and C. versicolor (≥ 0.3 U/mg). After 54 days of cultivation, the extent of microbial degradation was significantly higher than that achieved by the commercial laccases, reaching a maximum of 38 wt% for the C. versicolor-treated samples. Lignin degradation was further confirmed by thermal carbon analysis with a five-step temperature protocol: compared with the commercial laccases, all microbially degraded lignins showed a significant decrease in char formation at 850°C, with a corresponding increase in the carbon percentage evolved between 200°C and 500°C. To complement the carbon analysis, the degraded products at different stages of delignification by microorganisms and commercial laccases were chemically characterized by pyrolysis-GC-MS.

Keywords: lignin, microbial degradation, pyrolysis-GC-MS, thermal carbon analysis

Procedia PDF Downloads 396
785 Numerical Analysis for Soil Compaction and Plastic Points Extension in Pile Drivability

Authors: Omid Tavasoli, Mahmoud Ghazavi

Abstract:

A numerical analysis of the drivability of piles of different geometries is presented. In this paper, a three-dimensional finite difference analysis of plastic point extension and soil compaction due to pile driving is carried out. Four pile configurations are investigated: a cylindrical pile, a fully tapered pile, a T-C pile consisting of a tapered upper segment and a cylindrical lower segment, and a C-T pile with a cylindrical upper part followed by a tapered part. All piles, driven to a total penetration depth of 16 m, have the same length, equivalent surface areas, and approximately identical material volumes. An idealization of the pile-soil system during driving is adopted: the pile is modeled as a linear elastic material, while the soil obeys an elasto-plastic constitutive law whose failure is controlled by the Mohr-Coulomb criterion. The slip that occurs at the pile-soil contact surfaces along the shaft and at the toe during driving is simulated with interface elements. All initial and boundary conditions are the same in all analyses, and quiet boundaries are used to prevent wave reflection in the lateral and vertical directions of the soil domain. The results obtained from the numerical analyses were compared with other available numerical data and laboratory tests, showing satisfactory agreement. It is shown that as the taper angle increases, the permanent pile toe settlement increases and, therefore, the extension of plastic points increases. These are interesting phenomena in pile driving and are on the safe side for driven piles.
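The Mohr-Coulomb failure criterion governing the soil model takes the standard form

```latex
\tau_f = c + \sigma_n \tan\varphi
```

where $\tau_f$ is the shear strength on the failure plane, $c$ the cohesion, $\sigma_n$ the normal stress on that plane, and $\varphi$ the internal friction angle; a soil zone becomes a "plastic point" once its stress state reaches this envelope.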

Keywords: pile driving, finite difference method, non-uniform piles, pile geometry, pile set, plastic points, soil compaction

Procedia PDF Downloads 468
784 Planktivorous Fish Schooling Responses to Current at Natural and Artificial Reefs

Authors: Matthew Holland, Jason Everett, Martin Cox, Iain Suthers

Abstract:

High spatial-resolution distribution of planktivorous reef fish can reveal behavioural adaptations to optimise the balance between feeding success and predator avoidance. We used a multi-beam echosounder to record bathymetry and the three-dimensional distribution of fish schools associated with natural and artificial reefs. We utilised generalised linear models to assess the distribution, orientation, and aggregation of fish schools relative to the structure, vertical relief, and currents. At artificial reefs, fish schooled more closely to the structure and demonstrated a preference for the windward side, particularly when exposed to strong currents. Similarly, at natural reefs fish demonstrated a preference for windward aspects of bathymetry, particularly when associated with high vertical relief. Our findings suggest that under conditions with stronger current velocity, fish can exercise their preference to remain close to structure for predator avoidance, while still receiving an adequate supply of zooplankton delivered by the current. Similarly, when current velocity is low, fish tend to disperse for better access to zooplankton. As artificial reefs are generally deployed with the goal of creating productivity rather than simply attracting fish from elsewhere, we advise that future artificial reefs be designed as semi-linear arrays perpendicular to the prevailing current, with multiple tall towers. This will facilitate the conversion of dispersed zooplankton into energy for higher trophic levels, enhancing reef productivity and fisheries.

Keywords: artificial reef, current, forage fish, multi-beam, planktivorous fish, reef fish, schooling

Procedia PDF Downloads 141
783 Stable Isotope Ratios Data for Tracing the Origin of Greek Olive Oils and Table Olives

Authors: Efthimios Kokkotos, Kostakis Marios, Beis Alexandros, Angelos Patakas, Antonios Avgeris, Vassilios Triantafyllidis

Abstract:

H, C, and O stable isotope ratios were measured in olive oils and table olives originating from different regions of Greece. In particular, the stable isotope ratios of olive oils produced in the Lakonia region (Peloponnese, southern Greece) from different varieties, i.e., cvs 'Athinolia' and 'Koroneiki', were determined. Stable isotope ratios were also measured in table olives (cvs 'Koroneiki' and 'Kalamon') produced in the Messinia region. The aim of this study was to provide sufficient isotope ratio data for each variety and region of origin that could be used in studies discriminating oil olives and table olives produced from different varieties in other regions. In total, 97 samples of olive oil (cvs 'Athinolia' and 'Koroneiki') and 67 samples of table olives (cvs 'Kalamon' and 'Koroneiki'), collected during two consecutive sampling periods (2021-2022 and 2022-2023), were measured. The C, H, and O isotope ratios were measured using Isotope Ratio Mass Spectrometry (IRMS), and the results were analyzed using chemometric techniques. The isotope ratio measurements are expressed in permille (‰) using the delta (δ) notation (δ = Rsample/Rstandard − 1, where Rsample and Rstandard represent the isotope ratios of the sample and the standard). The results indicate that the stable isotope ratios of C, H, and O were −28.5 ± 0.45‰, −142.83 ± 2.82‰, and 25.86 ± 0.56‰ for olive oils produced in the Lakonia region from the 'Athinolia' variety, and −29.78 ± 0.71‰, −143.62 ± 1.4‰, and 26.32 ± 0.55‰ for 'Koroneiki'. The corresponding C, H, and O values for table olives from the Messinia region were −28.58 ± 0.63‰, −138.09 ± 3.27‰, and 25.45 ± 0.62‰ for 'Kalamon' and −29.41 ± 0.59‰, −137.67 ± 1.15‰, and 24.37 ± 0.6‰ for 'Koroneiki'.
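The δ notation above can be illustrated with a short numeric sketch; the VPDB ¹³C/¹²C reference ratio used here is a commonly cited approximate value and serves only as an example:

```python
def delta_permille(r_sample, r_standard):
    """delta = Rsample/Rstandard - 1, expressed in permille."""
    return (r_sample / r_standard - 1.0) * 1000.0

R_VPDB = 0.011180  # approximate 13C/12C ratio of the VPDB standard

# A sample depleted by 28.5 permille relative to VPDB, as for the olive oils above:
r_sample = R_VPDB * (1.0 - 28.5 / 1000.0)
print(round(delta_permille(r_sample, R_VPDB), 1))  # -28.5
```

Negative δ values thus simply mean the sample's heavy-to-light isotope ratio is lower than the standard's.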
Acknowledgments: This research has been co-financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH—CREATE—INNOVATE (Project code: T2EDK-02637; MIS 5075094, Title: ‘Innovative Methodological Tools for Traceability, Certification and Authenticity Assessment of Olive Oil and Olives’).

Keywords: olive oil, table olives, Isotope ratio, IRMS, geographical origin

Procedia PDF Downloads 36
782 The Role of Architectural Firms in Enhancing Building Energy Efficiency in Emerging Countries: Processes and Tools Evaluation of Architectural Firms in Egypt

Authors: Mahmoud F. Mohamadin, Ahmed Abdel Malek, Wessam Said

Abstract:

Achieving energy-efficient architecture in general, and in emerging countries in particular, is a challenging process that requires the contribution of various governmental, institutional, and individual entities. The role of architectural design is essential in this process, as it is one of the earliest steps on the road to sustainability, and architectural firms have a moral and professional responsibility to respond to these challenges and deliver buildings that consume less energy. This study evaluates the design processes and tools in practice at Egyptian architectural firms, based on a limited survey, to investigate whether their processes and methods can lead to projects that meet the Egyptian Code of Energy Efficiency Improvement. A case study of twenty architectural firms in Cairo was selected and categorized by scale: large, medium, and small. A questionnaire was designed and distributed to the firms, and personal meetings with the firms' representatives took place. The questionnaire addressed three main points: the design processes adopted, the use of performance-based simulation tools, and the use of BIM tools for energy efficiency purposes. The results revealed that only a small percentage of the large-scale firms have clear strategies for building energy efficiency in their designs, and even then the application is limited to certain project types or depends on the client's request. The percentage is much lower among medium-scale firms and almost absent among small-scale ones. This demonstrates the urgent need to raise the Egyptian architectural design community's awareness of the great importance of implementing these methods from the early stages of building design. Finally, the study proposes recommendations to help such firms create a healthy built environment and improve the quality of life in emerging countries.

Keywords: architectural firms, emerging countries, energy efficiency, performance-based simulation tools

Procedia PDF Downloads 265
781 Flood Modeling in Urban Area Using a Well-Balanced Discontinuous Galerkin Scheme on Unstructured Triangular Grids

Authors: Rabih Ghostine, Craig Kapfer, Viswanathan Kannan, Ibrahim Hoteit

Abstract:

Urban flooding resulting from a sudden release of water due to a dam-break or excessive rainfall is a serious environmental hazard that causes loss of human life and large economic losses. Anticipating floods before they occur could minimize these losses through the implementation of appropriate protection, provision, and rescue plans. This work reports on the numerical modelling of flash flood propagation in urban areas after an excessive rainfall event or dam-break. A two-dimensional (2D) depth-averaged shallow water model is used with a refined unstructured grid of triangles to represent the urban topography, and the 2D shallow water equations are solved using a second-order well-balanced discontinuous Galerkin scheme. One theoretical test case and three flood events are described to demonstrate the potential of the scheme: (i) wetting and drying in a parabolic basin; (ii) a flash flood over a physical model of the urbanized Toce River valley in Italy; (iii) wave propagation in the Reyran river valley following the Malpasset dam-break of 1959 (France); and (iv) the dam-break flood of October 1982 at the town of Sumacarcel (Spain). The capability of the scheme is also verified against alternative models. Computational results compare well with recorded data and show that the scheme is at least as efficient as comparable second-order finite volume schemes, with a notable speedup due to parallelization.
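The 2D depth-averaged shallow water equations solved by such a scheme can be written in conservative form as

```latex
\frac{\partial \mathbf{U}}{\partial t}
+ \frac{\partial \mathbf{F}(\mathbf{U})}{\partial x}
+ \frac{\partial \mathbf{G}(\mathbf{U})}{\partial y}
= \mathbf{S}(\mathbf{U}),
\qquad
\mathbf{U} = \begin{pmatrix} h \\ hu \\ hv \end{pmatrix},\quad
\mathbf{F} = \begin{pmatrix} hu \\ hu^2 + \tfrac{1}{2} g h^2 \\ huv \end{pmatrix},\quad
\mathbf{G} = \begin{pmatrix} hv \\ huv \\ hv^2 + \tfrac{1}{2} g h^2 \end{pmatrix},
```

where $h$ is the water depth, $(u, v)$ the depth-averaged velocity, $g$ the gravitational acceleration, and $\mathbf{S}$ collects the bed-slope and friction source terms; "well-balanced" means the discrete flux gradients exactly balance these source terms at rest, so still water over uneven topography is preserved.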

Keywords: dam-break, discontinuous Galerkin scheme, flood modeling, shallow water equations

Procedia PDF Downloads 161
780 Understanding Retail Benefits Trade-offs of Dynamic Expiration Dates (DED) Associated with Food Waste

Authors: Junzhang Wu, Yifeng Zou, Alessandro Manzardo, Antonio Scipioni

Abstract:

Dynamic expiration dates (DEDs) play an essential role in reducing food waste in the context of a sustainable cold chain and food system. However, the trade-offs in retail benefits involved in setting expiration dates on fresh food products are not well understood. This study develops a multi-dimensional decision-making model that integrates DEDs with food waste based on wireless sensor network technology. The model takes the initial quality of the fresh food and the rate of quality change with storage temperature as independent variables to identify the potential impacts on retail food waste of applying a DEDs system. The results show that the retail benefits of a DEDs system depend on the scenario, despite its advanced technology. Under DEDs, the storage temperature of the retail shelf is the main driver of the food waste rate, followed by the rate of quality change and the initial quality of the food products. We found that a DEDs system can reduce food waste when food products are stored in lower-temperature areas, and that its potential for food savings over an extended replenishment cycle is significantly greater than with fixed expiration dates (FEDs). On the other hand, the information-sharing approach of the DEDs system is relatively limited in improving the sustainability performance of retail food waste and can even mislead consumers' choices. The research provides a comprehensive understanding to support techno-economic choices regarding DEDs and food waste in retail.

Keywords: dynamic expiry dates (DEDs), food waste, retail benefits, fixed expiration dates (FEDs)

Procedia PDF Downloads 99
779 Optimizing Fire Tube Boiler Design for Efficient Saturated Steam Production: A Cost-Minimization Approach

Authors: Yoftahe Nigussie Worku

Abstract:

This report presents the design of a fire tube boiler for the efficient generation of saturated steam. The objective is to produce 2000 kg/h of saturated steam at a design pressure of 12 bar. The design balances cost-effectiveness and parameter refinement, with emphasis on the selection of materials for component parts, construction materials, and production methods throughout the analytical phases. The analysis involves iterative calculations, using the pertinent formulas to optimize design parameters, including the selection of tube diameters and overall heat transfer coefficients. The boiler is configured with two passes, a choice influenced by tube and shell size considerations. Using No. 6 heavy fuel oil, with a higher heating value of 44000 kJ/kg and a lower heating value of 41300 kJ/kg, the fuel consumption is 140.37 kg/h; the boiler achieves a heat output of 1610 kW at an efficiency of 85.25%. The fluid flow within the boiler follows a cross-flow arrangement, chosen for its inherent advantages. Internally, the tube sheet is welded to the shell and secured by gaskets and welds to ensure structural integrity. The shell design adheres to the European Standard code sections for pressure vessels, covering weight, supplementary accessories (lifting lugs, openings, ends, manhole), and detailed assembly drawings. This work represents a significant step in optimizing fire tube boiler technology, balancing efficiency and safety in the pursuit of enhanced saturated steam production.
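The quoted figures can be cross-checked with a simple energy balance: on an LHV basis the stated fuel flow corresponds to a thermal rate of about 1610 kW, and the 85.25% efficiency then scales this to the steam-side duty. A back-of-envelope sketch, not the full design calculation:

```python
fuel_rate = 140.37   # fuel consumption, kg/h (from the design)
lhv = 41300.0        # lower heating value of No. 6 fuel oil, kJ/kg (from the design)

heat_rate_kw = fuel_rate * lhv / 3600.0   # kJ/h -> kW; fuel heat release rate
steam_duty_kw = 0.8525 * heat_rate_kw     # applying the stated 85.25% efficiency

print(round(heat_rate_kw))   # ~1610 kW, matching the figure quoted above
print(round(steam_duty_kw, 1))
```

A full check of the 2000 kg/h steam rating would additionally need the feedwater and saturated-steam enthalpies at 12 bar from steam tables, which are not given in the abstract.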

Keywords: fire tube, saturated steam, material selection, efficiency

Procedia PDF Downloads 57
778 Effect of 3-Dimensional Knitted Spacer Fabrics Characteristics on Its Thermal and Compression Properties

Authors: Veerakumar Arumugam, Rajesh Mishra, Jiri Militky, Jana Salacova

Abstract:

The thermo-physiological comfort and compression properties of knitted spacer fabrics were evaluated by varying different spacer fabric parameters. Air permeability and water vapor transmission were measured using the Textest FX-3300 air permeability tester and the PERMETEST instrument, the thermal behavior of the fabrics was obtained with a thermal conductivity analyzer, and the overall moisture management capacity was evaluated with a moisture management tester. The compression properties of the spacer fabrics were tested using the Kawabata Evaluation System (KES-FB3), in which the compression resilience, work of compression, linearity of compression, and other parameters were calculated from the pressure-thickness curves. Analysis of variance (ANOVA) was performed using the statistical software QC Expert Trilobite and Darwin to compare the influence of the different fabric parameters on the thermo-physiological and compression behavior of the samples. The study established that the raw material, type of spacer yarn, density, thickness, and tightness of the surface layer have a significant influence on both thermal conductivity and work of compression in spacer fabrics. The parameter that mainly influences the water vapor permeability of these fabrics is the raw material, i.e., the wetting and wicking properties of the fibers. A Pearson correlation between the moisture capacity of the fabrics and their water vapor permeability was also found. These findings are important requirements for the further design of clothing for extreme environmental conditions.
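The KES-FB3 parameters mentioned above are derived from the pressure-thickness curve: the work of compression (WC) is the integral of pressure over thickness during loading, the resilience (RC) is the work recovered on unloading as a percentage of WC, and the linearity (LC) normalizes WC by the work of an ideal linear spring. A minimal sketch on synthetic data (illustrative values, not measurements from the study):

```python
def trapezoid(xs, ys):
    """Trapezoidal integral of y dx over increasing xs."""
    return sum((ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i]) / 2.0
               for i in range(len(xs) - 1))

# Synthetic loading/recovery curves: thickness (mm) vs pressure (gf/cm^2),
# from Tm = 3.0 mm at peak pressure Pm = 50 up to rest thickness T0 = 5.0 mm.
thickness  = [3.0, 3.5, 4.1, 4.6, 5.0]
p_loading  = [50.0, 28.0, 12.0, 5.0, 0.0]
p_recovery = [50.0, 15.0, 6.0, 2.0, 0.0]

wc = trapezoid(thickness, p_loading)        # work of compression, WC
wc_rec = trapezoid(thickness, p_recovery)   # work recovered on unloading
rc = 100.0 * wc_rec / wc                    # compression resilience, RC (%)
lc = wc / (50.0 * (5.0 - 3.0) / 2.0)        # linearity of compression, LC

print(round(wc, 2), round(rc, 1), round(lc, 3))  # 36.75 67.9 0.735
```

Higher RC indicates a springier spacer structure, while LC near 1 indicates a stiff, nearly linear response.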

Keywords: 3D spacer fabrics, thermal conductivity, moisture management, work of compression (WC), resilience of compression (RC)

Procedia PDF Downloads 527
777 Mapping of Geological Structures Using Aerial Photography

Authors: Ankit Sharma, Mudit Sachan, Anurag Prakash

Abstract:

Rapid growth in data acquisition technologies based on drones has led to advances in, and growing interest in, collecting high-resolution images of geological fields. While such platforms are advantageous in capturing large volumes of data in short flights, a number of challenges have to be overcome for efficient analysis, especially during data acquisition, image interpretation, and processing. We introduce a method for the effective mapping of geological fields using photogrammetric data of surfaces, drainage areas, water bodies, etc., captured by airborne vehicles such as UAVs. Satellite images were not used because of their inadequate resolution, their age (available imagery may be a year old or more), limited availability, the difficulty of capturing the exact scene required, and the lack of night-time imaging. The method combines automated image interpretation technology with human data interaction to model structures. First, geological structures are detected from the primary photographic dataset, and the equivalent three-dimensional structures are then identified from a digital elevation model, from which dip and dip direction can be calculated. The structural map is generated by a specified methodology: choosing the appropriate camera and camera mounting system, designing the UAV for the area and application, addressing the challenges of airborne systems (errors in image orientation, payload limits, mosaicking, georeferencing, and registration of different images), and applying the DEM. The paper shows the potential of this method for accurate and efficient modeling of geological structures, captured particularly from remote, inaccessible, and hazardous sites.
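The dip computation mentioned above can be sketched by fitting a plane through three DEM samples and reading the dip angle and dip direction off the plane normal. A minimal illustration, assuming an east/north/elevation coordinate frame (the sample coordinates are hypothetical):

```python
import math

def dip_from_points(p1, p2, p3):
    """Fit a plane through three DEM samples (x=east, y=north, z=elevation)
    and return (dip angle, dip direction) in degrees; the dip direction is
    the downslope azimuth measured clockwise from north."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],      # plane normal = u x v
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    if n[2] < 0:                         # orient the normal upward
        n = [-c for c in n]
    horiz = math.hypot(n[0], n[1])
    dip = math.degrees(math.atan2(horiz, n[2]))
    azimuth = math.degrees(math.atan2(n[0], n[1])) % 360.0
    return dip, azimuth

# A surface dropping 10 m over 100 m toward the east:
dip, az = dip_from_points((0.0, 0.0, 100.0), (100.0, 0.0, 90.0), (0.0, 100.0, 100.0))
print(round(dip, 1), round(az))  # 5.7 90
```

In practice one would least-squares-fit a plane over a local DEM window rather than use three exact points, but the geometry (dip from the normal's tilt, dip direction from its horizontal projection) is the same.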

Keywords: digital elevation model, mapping, photogrammetric data analysis, geological structures

Procedia PDF Downloads 670