Search results for: energy modeling tools
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 15332

3002 The Study of Solar Activity during Sun Eclipse and Its Relation to Earthquake

Authors: Hanieh Sadat Jannesari, Rahelehossadat Abtahi, Kourosh Bamzadeh, Alireza Nadimi

Abstract:

The earthquake is one of the most devastating natural hazards; hundreds of thousands of people have lost their lives as a result of it. So far, experts have tried to use precursors to identify an earthquake before it occurs in order to alert and save people, part of which relates to solar activity and earthquakes. The purpose of this article is to investigate solar activity during the solar eclipse as a precursor for pre-earthquake awareness. Information in this article is derived from the Influences and USGS Daily Data Center. During solar activity, electric interactions between the solar wind and celestial bodies are formed, and then gravitational lenses are formed. If, during this event, there is also an eclipse, the dispersed waves in space (in accordance with Einstein's theory of general relativity) in contact with plasma-gravitational lenses in space will move in a straight line toward the earth. In addition to forming the focal point, these gravitational lenses reflect the source image either at their focal length or farther away. The image reflected to the earth by ionized particles in the form of energy transmission lines can cause material collapse and earthquakes. In this study, the correlation between solar wind-celestial body interactions during the solar eclipse and the locations of large earthquakes is about 76%.

Keywords: earthquake, plasma-gravitational lens, solar eclipse, solar spots

Procedia PDF Downloads 37
3001 Application of the Micropolar Beam Theory for the Construction of the Discrete-Continual Model of Carbon Nanotubes

Authors: Samvel H. Sargsyan

Abstract:

Together with the study of the electron-optical properties of nanostructures, and proceeding from experiment-based data, the study of the mechanical properties of nanostructures has become quite topical. For the study of the mechanical properties of fullerene, carbon nanotubes, graphene, and other nanostructures, one of the crucial issues is the construction of adequate mathematical models. Among all mathematical models of graphene or carbon nanotubes, the so-called discrete-continual model is specifically important: it substitutes the interactions between atoms with elastic beams or springs. The present paper demonstrates the construction of the discrete-continual beam model for carbon nanotubes and graphene, where the micropolar beam model based on the theory of moment elasticity is adopted. Using the energy balance principle, the elastic moment constants of the beam model, expressed through the physical and geometrical parameters of the carbon nanotube or graphene, are determined. By switching from the discrete-continual beam model to the continual one, the models of a micropolar elastic cylindrical shell and a micropolar elastic plate are confirmed as continual models for the carbon nanotube and graphene, respectively.

Keywords: carbon nanotube, discrete-continual, elastic, graphene, micropolar, plate, shell

Procedia PDF Downloads 162
3000 The Life Skills Project: Client-Centered Approaches to Life Skills Acquisition for Homeless and At-Risk Populations

Authors: Leah Burton, Sara Cumming, Julianne DiSanto

Abstract:

Homelessness is a widespread and complex problem in Canada and around the globe. Many Canadians will face homelessness at least once in their lifetime, with several experiencing subsequent bouts or cyclical patterns of housing precarity. While a Housing First approach to homelessness is a long-standing and widely accepted best practice, it is also recognized that the acquisition of life skills is an effective way to reduce cycles of homelessness. Indeed, when individuals are provided with a range of life skills, such as (but not limited to) financial literacy, household management, interpersonal skills, critical thinking, and resource management, they are given the tools required to maintain housing for a lifetime, thus reducing a repetitive need for services. However, there is limited research regarding the best ways to teach life skills, a problem that has been further complicated in a post-pandemic world, where services are being delivered online or in a hybrid model of care. More than this, it is difficult to provide life skills training on a large scale without losing a client-centered approach to services. This lack of client-centeredness is also seen in the lack of attention to culturally sensitive life skills, which consider the diverse needs of individuals and embed equity, diversity, and inclusion (EDI) within the skills being taught. This study aims to fill these identified gaps in the literature by employing a community-engaged research (CER) approach. Academics, government, funders, front-line staff, and clients at 15 not-for-profits from across the Greater Toronto Area in Ontario, Canada, collaborated to co-create a virtual, client-centric, EDI-informed life skills learning management system. A triangulation methodology was utilized for this research.
An environmental scan of current best practices was conducted, and over 100 front-line staff (including workers, managers, and executive directors who work with homeless populations) participated in two separate Creative Problem Solving sessions. Over 200 individuals with experience of homelessness completed quantitative and open-ended surveys. All parts of this research aimed to discover the skill areas that individuals need to maintain housing and to ascertain what a more client-driven, EDI-informed approach to life skills training should include. This presentation will showcase the findings on which life skills are deemed essential for homeless and precariously housed individuals.

Keywords: homelessness, housing first, life skills, community engaged research, client-centered

Procedia PDF Downloads 105
2999 Single Mothers by Choice at Corona Time - The Perception of Social Support, Happiness and Work-Family Conflict and their Effect on State Anxiety

Authors: Orit Shamir Balderman, Shamir Michal

Abstract:

Israel often deals with crisis situations, but most have been security crises (e.g., war). This is the first time that Israel has dealt with a health and social emergency as part of a global crisis. The crisis began in January 2020 with the emergence of the novel coronavirus (Covid-19), which was defined as a pandemic (World Health Organization, 2020) and arrived in Israel in early March 2020. This study examined how single mothers by choice (SMBC) experience state anxiety (SA), social support, work-family conflict (WFC), and happiness. This group has not been studied in the context of crises in general or of a global crisis in particular. Using a snowball sample, 386 SMBC answered an online questionnaire. The findings show a negative relationship between income and level of state anxiety. State anxiety was also negatively associated with social support, level of happiness, and WFC. Finally, a stepwise regression analysis indicated that happiness explained 34% of the variance in SA. We also found that most of the women did not turn to formal support agencies such as social workers, other government ministries, or municipal welfare services. A positive and strong correlation was also found between SA and WFC. The findings of the study reinforce the understanding that although these women made a conscious and informed decision regarding the choice of their family unit, their situation is more complex in the absence of spousal support. Therefore, this study, like other future studies in the field of SMBC, may contribute to improving their social status and to the understanding that they are a unique group. Although SMBC have been a growing sector of society in the past few years, they still have special needs and require special attention from formal and informal support systems.
A comparative study of these two groups and in different countries would shed light on SA among mothers in general, regardless of their relationship status and location. Researchers should expand this study by comparing mothers in relationships and exploring how SMBC coped in other countries. In summary, the findings of the study contribute knowledge on three levels: (a) knowledge about SMBC in general and during crisis situations; (b) examination of social support using tools assessing the receipt of assistance and support, some of which were developed for the present study; and (c) insights regarding the counseling, accompaniment, and guidance provided by welfare mechanisms.

Keywords: single mothers by choice, state anxiety, social support, happiness, work–family conflict

Procedia PDF Downloads 90
2998 Land Degradation Vulnerability Modeling: A Study on Selected Micro Watersheds of West Khasi Hills Meghalaya, India

Authors: Amritee Bora, B. S. Mipun

Abstract:

Land degradation is often used to describe the environmental phenomena that reduce land's original productivity both qualitatively and quantitatively. The study of land degradation vulnerability primarily deals with "Environmentally Sensitive Areas" (ESA) and the amount of topsoil loss due to erosion. In many studies, the assessment of the existing status of land degradation is used to represent vulnerability, and in most of them the primary emphasis is on sensitivity to soil erosion only. However, the concept of land degradation vulnerability can have different objectives depending upon the perspective of the study. It shows the extent to which changes in land use and land cover can imprint their effect on the land. In other words, it represents the susceptibility of a piece of land to a permanent or long-run degradation of its productive quality. It is also important to mention that land degradation vulnerability is not the outcome of a single factor: it is a probability assessment of the status of land degradation, and it needs to consider both biophysical and human-induced parameters. To avoid the complexity of previous models in this regard, the present study has aimed to generate a simplified model that assesses land degradation vulnerability in terms of current human population pressure, land use practices, and existing biophysical conditions. It is a "mixed-method" model, termed the land degradation vulnerability index (LDVi). It was originally inspired by the MEDALUS (Mediterranean Desertification and Land Use) model, 1999, and Farazadeh's 2007 revised version of it, and it follows the guidelines of the Space Application Centre, Ahmedabad / Indian Space Research Organisation for land degradation vulnerability.
The model integrates the climatic index (Ci), vegetation index (Vi), erosion index (Ei), land utilization index (Li), population pressure index (Pi), and cover management index (CMi), giving equal weightage to each parameter. The final result shows that the very high vulnerability zone primarily indicates three prominent circumstances: land under continuous population pressure, a high concentration of human settlement, and a high amount of topsoil loss due to surface runoff within the study sites. As all the parameters of the model are amalgamated with equal weightage, the LDVi model, further supported by regression analysis, also provides a strong grasp of each parameter and of the extent to which each is competent to trigger the land degradation process.
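The equal-weight aggregation described above can be sketched in a few lines; the assumption that each component index is normalized to [0, 1] is ours for illustration, since the abstract does not state the normalization used:

```python
# Equal-weight land degradation vulnerability index (LDVi) sketch.
# The six component indices are assumed to be normalized to [0, 1];
# the study's actual normalization scheme is not specified here.

def ldvi(ci, vi, ei, li, pi, cmi):
    """Combine the climatic, vegetation, erosion, land utilization,
    population pressure and cover management indices with equal weight."""
    indices = [ci, vi, ei, li, pi, cmi]
    if not all(0.0 <= x <= 1.0 for x in indices):
        raise ValueError("component indices must be normalized to [0, 1]")
    return sum(indices) / len(indices)

# Example: a site with high erosion and population pressure
score = ldvi(ci=0.4, vi=0.3, ei=0.9, li=0.6, pi=0.8, cmi=0.5)
```

With equal weightage, no single index dominates the result, which is what lets the subsequent regression analysis attribute vulnerability back to individual parameters.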

Keywords: population pressure, land utilization, soil erosion, land degradation vulnerability

Procedia PDF Downloads 171
2997 An Integrative Computational Pipeline for Detection of Tumor Epitopes in Cancer Patients

Authors: Tanushree Jaitly, Shailendra Gupta, Leila Taher, Gerold Schuler, Julio Vera

Abstract:

Genomics-based personalized medicine is a promising approach to fight aggressive tumors based on a patient's specific tumor mutation and expression profiles. A remarkable case is dendritic cell-based immunotherapy, in which tumor epitopes targeting a patient's specific mutations are used to design a vaccine that helps stimulate cytotoxic T cell-mediated anticancer immunity. Here we present a computational pipeline for epitope-based personalized cancer vaccines using patient-specific haplotype and cancer mutation profiles. In the proposed workflow, we analyze whole exome sequencing and RNA sequencing patient data to detect patient-specific mutations and their expression levels. Epitopes containing the tumor mutations are computationally predicted using the patient's haplotype and filtered based on their expression level, binding affinity, and immunogenicity. We calculate the binding energy for each filtered major histocompatibility complex (MHC)-peptide complex using docking studies, and use this feature to further select good epitope candidates.
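The filtering step described above can be sketched as a simple threshold pass; the field names and cutoff values (TPM, IC50 in nM, an immunogenicity score) are illustrative assumptions, not values taken from the paper:

```python
# Sketch of the epitope filtering step: keep candidates that are expressed,
# bind MHC strongly, and are predicted immunogenic. Thresholds are
# illustrative assumptions, not the pipeline's actual parameters.

from dataclasses import dataclass

@dataclass
class Epitope:
    peptide: str
    expression_tpm: float   # expression level of the mutated transcript
    ic50_nm: float          # predicted MHC binding affinity (lower = stronger)
    immunogenicity: float   # predicted immunogenicity score in [0, 1]

def filter_epitopes(epitopes, min_tpm=1.0, max_ic50=500.0, min_immuno=0.5):
    """Apply the three filters described in the workflow."""
    return [e for e in epitopes
            if e.expression_tpm >= min_tpm
            and e.ic50_nm <= max_ic50
            and e.immunogenicity >= min_immuno]

candidates = [
    Epitope("SIINFEKL", 5.0, 50.0, 0.9),    # expressed, strong binder
    Epitope("AAAAAAAA", 0.1, 2000.0, 0.1),  # fails all three filters
]
selected = filter_epitopes(candidates)
```

The surviving candidates would then go on to the docking stage, where binding energy provides an additional ranking feature.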

Keywords: cancer immunotherapy, epitope prediction, NGS data, personalized medicine

Procedia PDF Downloads 259
2996 SAFECARE: Integrated Cyber-Physical Security Solution for Healthcare Critical Infrastructure

Authors: Francesco Lubrano, Fabrizio Bertone, Federico Stirano

Abstract:

Modern societies strongly depend on Critical Infrastructures (CI). Hospitals, power supplies, water supplies, and telecommunications are just a few examples of CIs that provide vital functions to societies. CIs like hospitals are very complex environments, characterized by a huge number of cyber and physical systems that are becoming increasingly integrated. Ensuring a high level of security within such critical infrastructure requires a deep knowledge of the vulnerabilities, threats, and potential attacks that may occur, as well as defence, prevention, and mitigation strategies. The possibility to remotely monitor and control almost everything is pushing the adoption of network-connected devices. This implicitly introduces new threats and potential vulnerabilities, posing a risk, especially to those devices connected to the Internet. Modern medical devices used in hospitals are no exception and are increasingly being connected to enhance their functionalities and to ease management. Moreover, hospitals are environments with high flows of people, which are difficult to monitor and can fairly easily gain access to the same places used by the staff, potentially causing damage. It is therefore clear that physical and cyber threats should be considered, analysed, and treated together as cyber-physical threats, which means that an integrated approach is required. SAFECARE, an integrated cyber-physical security solution, tries to respond to these issues within healthcare infrastructures. The challenge is to bring together the most advanced technologies from the physical and cyber security spheres, to achieve a global optimum for systemic security and for the management of combined cyber and physical threats and incidents and their interconnections. Moreover, potential impacts and cascading effects are evaluated through impact propagation models that rely on modular ontologies and a rule-based engine.
Indeed, the SAFECARE architecture foresees: i) a cyber security macroblock, where innovative tools are deployed to monitor network traffic, systems, and medical devices; ii) a physical security macroblock, where video management systems are coupled with access control management, building management systems, and innovative AI algorithms to detect behavioural anomalies; and iii) an integration system that collects all incoming incidents, simulates their potential cascading effects, and provides alerts and updated information regarding asset availability.
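The cascading-effect idea behind the impact propagation models can be illustrated with a toy dependency-graph traversal; the asset names, dependency edges, and attenuation rule below are invented for illustration and are not part of the SAFECARE ontologies or rule engine:

```python
# Toy cascading-impact propagation over an asset dependency graph.
# Assets, edges, and the per-hop attenuation factor are illustrative
# assumptions; a real system would derive rules from its ontologies.

from collections import deque

# directed edges: if an asset is impacted, its dependents are impacted too
DEPENDENTS = {
    "power_supply": ["building_mgmt", "medical_devices"],
    "building_mgmt": ["access_control"],
    "medical_devices": [],
    "access_control": [],
}

def propagate(start, impact=1.0, attenuation=0.5):
    """Breadth-first propagation: each hop attenuates the impact score."""
    scores = {start: impact}
    queue = deque([start])
    while queue:
        asset = queue.popleft()
        for dep in DEPENDENTS.get(asset, []):
            cascaded = scores[asset] * attenuation
            if cascaded > scores.get(dep, 0.0):
                scores[dep] = cascaded
                queue.append(dep)
    return scores

scores = propagate("power_supply")  # an incident on the power supply
```

A rule-based engine would replace the fixed attenuation with per-edge rules, but the traversal structure is the same.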

Keywords: cyber security, defence strategies, impact propagation, integrated security, physical security

Procedia PDF Downloads 169
2995 The Impact of the Mastering My Mental Fitness™-Nurses Workshops on Practical Nursing Students’ Perceived Burnout and Psychological Capital: An Embedded Mixed Methods Study

Authors: Linda Frost, Lindsay Anderson, Jana Borras, Ariel Dysangco, Vimabayi Makwaira

Abstract:

The academic environment in which nursing students are immersed comes with many demands and expectations. Course load, clinical placements, and financial expenses are examples of the pressures facing students each semester. These pressures contribute to student stress and impact their overall well-being and mental fitness. Students' ability to cope with stress and bounce back from adversity is enhanced when their mental fitness is built up. Building mental fitness has the benefit of improving physical health, relationships, self-esteem, resilience, work productivity, and overall contentment, happiness, and life satisfaction. While self-care is encouraged to avoid burnout, there is a gap in the literature on programs that help build nursing students' mental health and ability to engage in self-care. There is an opportunity and a need to design programs and implement actions aimed at reducing stress and its adverse effects on nursing students. Nursing students require the support of people who understand the complexities of the nursing profession, the multifaceted work environments in which they operate, and the impact these environments have on their mental fitness. Nursing academia is in the best position to ensure that tools are in place to support the next generation of nurses, who face a career with significant emotional and physical demands. This is a mixed-methods study using an embedded design. We utilized a pretest-posttest design to compare the difference in psychological capital (PsyCap) and burnout between students who received the Mastering My Mental Fitness-Nurses™ (MMMF-N™) workshops (n=8) and a control group (n=9) who did not. Semi-structured interviews were conducted with the eight nursing students in the intervention group and, together with data from feedback forms, were used to explore the impact of the workshops on students' burnout and PsyCap and to determine how to improve the workshops for future students.
The quantitative and qualitative data will be merged using a side-by-side comparison, in a discussion format that allows the results from both phases to be compared. The findings will be available in January 2025. We anticipate that students in the control and intervention groups will report similar levels of burnout, and that students in the intervention group will indicate the benefits of the MMMF-N™ workshops through the qualitative interviews and workshop feedback forms.

Keywords: burnout, mental fitness, nursing students, psychological capital

Procedia PDF Downloads 39
2994 Algorithm Development of Individual Lumped Parameter Modelling for Blood Circulatory System: An Optimization Study

Authors: Bao Li, Aike Qiao, Gaoyang Li, Youjun Liu

Abstract:

Background: The lumped parameter model (LPM) is a common numerical model for hemodynamic calculation. An LPM uses circuit elements to simulate the human blood circulatory system, and physiological indicators and characteristics can be acquired through the model. However, because physiological indicators differ between individuals, the parameters in an LPM should be personalized so that the calculated results are convincing and reflect individual physiological information. This study aimed to develop an automatic and effective optimization method to personalize the parameters in an LPM of the blood circulatory system, which is of great significance for the numerical simulation of individual hemodynamics. Methods: A closed-loop LPM of the human blood circulatory system, applicable to most individuals, was established based on anatomical structures and physiological parameters. Patient-specific physiological data from 5 volunteers were collected non-invasively as the personalization objectives of the individual LPMs. In this study, the blood pressure and flow rate of the heart, brain, and limbs were the main concerns. The collected systolic blood pressure, diastolic blood pressure, cardiac output, and heart rate were set as objective data, and the waveforms of carotid artery flow and ankle pressure were set as objective waveforms. For the collected data and waveforms, a sensitivity analysis of each parameter in the LPM was conducted to determine the sensitive parameters that have an obvious influence on the objectives. Simulated annealing was adopted to iteratively optimize the sensitive parameters, with the objective function being the root mean square error between the collected and the simulated waveforms and data. Each parameter in the LPM was optimized over 500 iterations. Results: The sensitive parameters in the LPM were optimized according to the collected data of the 5 individuals. The results show only a slight error between the collected and simulated data.
The average relative root mean square errors over all optimization objectives for the 5 samples were 2.21%, 3.59%, 4.75%, 4.24%, and 3.56%, respectively. Conclusions: The slight error demonstrates the good effect of the optimization. The individual modeling algorithm developed in this study can effectively achieve individualization of the LPM of the blood circulatory system. After optimization, an LPM with individual parameters can output individual physiological indicators, which is applicable to the numerical simulation of patient-specific hemodynamics.
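The optimization loop described (simulated annealing minimizing the RMSE between measured and simulated signals) can be sketched as follows; the two-parameter stand-in model, step sizes, and cooling schedule are illustrative assumptions, not the paper's actual lumped parameter model:

```python
# Toy illustration of RMSE-driven simulated annealing for parameter fitting.
# The "model" here is a two-parameter damped exponential; the study's real
# closed-loop LPM is far more detailed.

import math, random

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def simulate(params, times):
    # placeholder model: parameters = (amplitude, decay rate)
    amp, decay = params
    return [amp * math.exp(-decay * t) for t in times]

def anneal(target, times, init, steps=500, t0=1.0):
    random.seed(0)  # deterministic for the example
    best = cur = list(init)
    best_err = cur_err = rmse(simulate(cur, times), target)
    for k in range(steps):
        temp = t0 * (1 - k / steps) + 1e-6          # linear cooling
        cand = [p + random.gauss(0, 0.1) for p in cur]
        err = rmse(simulate(cand, times), target)
        # accept improvements always; worse moves with Metropolis probability
        if err < cur_err or random.random() < math.exp((cur_err - err) / temp):
            cur, cur_err = cand, err
        if err < best_err:
            best, best_err = cand, err
    return best, best_err

times = [i * 0.1 for i in range(20)]
target = simulate((2.0, 0.5), times)     # synthetic "measured" waveform
fit, err = anneal(target, times, init=(1.0, 1.0))
```

In the study's setting, `simulate` would run the closed-loop LPM and `target` would hold the collected waveforms and data; only the sensitive parameters identified by the sensitivity analysis would be perturbed.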

Keywords: blood circulatory system, individual physiological indicators, lumped parameter model, optimization algorithm

Procedia PDF Downloads 141
2993 Human Resource Management Functions and Employee Performance of Professional Health Workers in Public District Hospitals

Authors: Benjamin Mugisha Bugingo

Abstract:

Healthcare staff have been considered a significant pillar of the health care system. However, the challenge of human resources for health, in terms of the turnover of health workers in Uganda, has become more pronounced in recent years. The objective of the paper, therefore, was to investigate the influence of human resource management functions on the employee performance of professional health workers in public district hospitals in Kampala. The study objectives were: to establish the effect of the performance management function, financial incentives, non-financial incentives, and participation and involvement in decision-making on the employee performance of professional health workers in public district hospitals in Kampala. The study was grounded in social exchange theory and equity theory. It adopted a descriptive, cross-sectional research design with a mixed-methods approach. With a population of 402 individuals, the study considered a sample of 252 respondents, including doctors, nurses, midwives, pharmacists, and dentists from 3 district hospitals. The study instruments entailed a questionnaire as a quantitative data collection tool, and interviews and focus group discussions as qualitative data gathering tools. To analyze the quantitative data, descriptive statistics were used to assess the perceived status of human resource management functions and the magnitude of intentions to stay, and inferential statistics were used to show the effect of the predictors on the outcome variable by fitting a multiple linear regression. Qualitative data were analyzed thematically and reported in narrative form with verbatim quotes, and were used to complement the descriptive findings for a better understanding of the magnitude of the study variables.
The findings of this study showed a significant and positive effect of the performance management function, financial incentives, non-financial incentives, and participation and involvement in decision-making on the employee performance of professional health workers in public district hospitals in Kampala. This study is expected to be a major contributor to the improvement of the health system in the country and in other similar settings, as it provides insights for strategic orientation in the area of human resources for health, especially for enhanced employee performance in line with an integrated human resource management approach.
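The inferential step described (a multiple linear regression of employee performance on several HRM-function predictors) can be sketched with ordinary least squares; the variable roles mirror the abstract, but the data below are synthetic and the coefficients are invented for illustration:

```python
# Minimal multiple linear regression via ordinary least squares (NumPy).
# Four synthetic predictors stand in for performance management, financial
# incentives, non-financial incentives, and participation in decision-making;
# the coefficients and noise level are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(42)
n = 252  # sample size reported in the abstract

X = rng.uniform(1, 5, size=(n, 4))          # e.g. 5-point Likert-style scores
true_beta = np.array([0.4, 0.3, 0.2, 0.5])  # invented effect sizes
y = 1.0 + X @ true_beta + rng.normal(0, 0.1, size=n)  # "employee performance"

# add an intercept column and solve by least squares
X1 = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(X1, y, rcond=None)
```

A significant, positive `beta_hat` entry for a predictor corresponds to the kind of positive effect the study reports for each HRM function.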

Keywords: human resource functions, employee performance, employee wellness, professional workers

Procedia PDF Downloads 104
2992 Cancer Burden and Policy Needs in the Democratic Republic of the Congo: A Descriptive Study

Authors: Jean Paul Muambangu Milambo, Peter Nyasulu, John Akudugu, Leonidas Ndayisaba, Joyce Tsoka-Gwegweni, Lebwaze Massamba Bienvenu, Mitshindo Mwambangu Chiro

Abstract:

In 2018, non-communicable diseases (NCDs) were responsible for 48% of deaths in the Democratic Republic of the Congo (DRC), with cancer contributing to 5% of these deaths. There is a notable absence of cancer registries, capacity-building activities, budgets, and treatment roadmaps in the DRC. Current cancer estimates are primarily based on mathematical modeling with limited data from neighboring countries. This study aimed to assess cancer subtype prevalence in Kinshasa hospitals and compare these findings with WHO model estimates. Methods: A retrospective observational study was conducted from 2018 to 2020 at HJ Hospitals in Kinshasa. Data were collected using American Cancer Society (ACS) questionnaires and physician logs. Descriptive analysis was performed using STATA version 16 to estimate the cancer burden and provide evidence-based recommendations. Results: The chart review at HJ Hospitals in Kinshasa (2018-2020) indicates that, out of 6,852 samples, approximately 11.16% were diagnosed with cancer. The distribution of cancer subtypes in this cohort was as follows: breast cancer (33.6%), prostate cancer (21.8%), colorectal cancer (9.6%), lymphoma (4.6%), and cervical cancer (4.4%). These figures are based on histopathological confirmation at the facility and may not fully represent the broader population due to potential selection biases related to geographic and financial accessibility to the hospital. In contrast, the World Health Organization (WHO) model estimates for cancer prevalence in the DRC show different proportions: cervical cancer (15.9%), prostate cancer (15.3%), breast cancer (14.9%), liver cancer (6.8%), colorectal cancer (5.9%), and other cancers (41.2%) (WHO, 2020). Conclusion: The data indicate a rising cancer prevalence in the DRC but highlight significant gaps in clinical, biomedical, and genetic cancer data.
The establishment of a population-based cancer registry (PBCR) and a defined cancer management pathway is crucial. The current estimates are limited due to data scarcity and inconsistencies in clinical practices. There is an urgent need for multidisciplinary cancer management, integration of palliative care, and improvement in care quality based on evidence-based measures.

Keywords: cancer, risk factors, DRC, gene-environment interactions, survivors

Procedia PDF Downloads 25
2991 Control Flow around NACA 4415 Airfoil Using Slot and Injection

Authors: Imine Zakaria, Meftah Sidi Mohamed El Amine

Abstract:

One of the most vital aerodynamic organs of a flying machine is the wing, which allows it to fly efficiently in the air. The flow around the wing is very sensitive to changes in the angle of attack. Beyond a certain value, the boundary layer separates on the upper surface, which causes instability and a total degradation of aerodynamic performance called stall. Consequently, controlling the flow around an airfoil has become a research concern in the aeronautics field. There are two techniques for controlling the flow around a wing to improve its aerodynamic performance: passive and active control. Blowing and suction are among the active techniques for controlling boundary layer separation around an airfoil. Their objective is to give energy to the air particles in the boundary layer separation zones and to create vortex structures that homogenize the velocity near the wall and allow control. Blowing and suction have long been used as flow control actuators around obstacles; in 1904, Prandtl applied permanent blowing to a cylinder to delay boundary layer separation. In the present study, several numerical investigations have been developed to predict the turbulent flow around an aerodynamic profile. A CFD code was used for several angles of attack in order to validate the present work against the literature in the case of a clean profile. The variation of the lift coefficient CL with the momentum coefficient

Keywords: CFD, control flow, lift, slot

Procedia PDF Downloads 205
2990 TARF: Web Toolkit for Annotating RNA-Related Genomic Features

Authors: Jialin Ma, Jia Meng

Abstract:

Genomic features, i.e., genome-based coordinates, are commonly used for the representation of biological features such as genes, RNA transcripts, and transcription factor binding sites. For the analysis of RNA-related genomic features, such as RNA modification sites, a common task is to correlate these features with transcript components (5'UTR, CDS, 3'UTR) to explore their distribution characteristics in terms of transcriptomic coordinates, e.g., to examine whether a specific type of biological feature is enriched near transcription start sites. Existing approaches for performing these tasks involve the manipulation of a gene database, conversion from genome-based to transcript-based coordinates, and visualization methods capable of showing RNA transcript components and the distribution of the features. These steps are complicated and time-consuming, especially for researchers who are not familiar with the relevant tools. To overcome this obstacle, we developed a dedicated web app, TARF, a web toolkit for annotating RNA-related genomic features. The TARF web tool is intended to provide a web-based way to easily annotate and visualize RNA-related genomic features. Once a user has uploaded features in BED format and specified a built-in transcript database, or uploaded a customized gene database in GTF format, the tool fulfills its three main functions. First, it adds annotations for gene and RNA transcript components. For every feature provided by the user, overlaps with RNA transcript components are identified, and the information is combined into one table that is available for copy and download. Summary statistics about ambiguous assignments are also produced. Second, the tool provides a convenient visualization of the features at the single gene/transcript level.
For a selected gene, the tool shows the features against the gene model in a genome-based view, and also maps the features to transcript-based coordinates and shows their distribution along a single spliced RNA transcript. Third, a global transcriptomic view of the genomic features is generated utilizing the Guitar R/Bioconductor package. The distribution of features on RNA transcripts is normalized with respect to RNA transcript landmarks, and the enrichment of the features on different RNA transcript components is demonstrated. We tested the newly developed TARF toolkit with 3 different types of genomic features, related to chromatin H3K4me3, RNA N6-methyladenosine (m6A), and RNA 5-methylcytosine (m5C), obtained from ChIP-Seq, MeRIP-Seq, and RNA BS-Seq data, respectively. TARF successfully revealed their respective distribution characteristics, i.e., H3K4me3, m6A, and m5C are enriched near transcription start sites, stop codons, and 5'UTRs, respectively. Overall, TARF is a useful web toolkit for the annotation and visualization of RNA-related genomic features, and should help simplify the analysis of various RNA-related genomic features, especially those related to RNA modifications.
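The core genome-to-transcript coordinate conversion that such a tool must perform can be sketched as follows; the exon coordinates are a toy example, whereas a real gene model would come from the uploaded GTF file:

```python
# Minimal genome-to-transcript coordinate mapping sketch.
# Exons are given as 0-based half-open (start, end) intervals on the
# + strand, listed 5'->3'; the toy transcript below is illustrative.

def genome_to_transcript(pos, exons):
    """Map a 0-based genomic position to a transcript coordinate,
    or return None if the position is intronic/outside the transcript."""
    offset = 0
    for start, end in exons:
        if start <= pos < end:
            return offset + (pos - start)
        offset += end - start
    return None

exons = [(100, 200), (300, 350)]                  # two-exon toy transcript
assert genome_to_transcript(150, exons) == 50     # inside exon 1
assert genome_to_transcript(310, exons) == 110    # 100 exon-1 bases + 10
assert genome_to_transcript(250, exons) is None   # intronic position
```

Minus-strand transcripts additionally require reversing the exon order and orientation, which is the kind of bookkeeping such a toolkit hides from the user.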

Keywords: RNA-related genomic features, annotation, visualization, web server

Procedia PDF Downloads 212
2989 Characterization of Nanostructured and Conventional TiAlN and AlCrN Coated ASTM-SA213-T-11 Boiler Steel

Authors: Vikas Chawla, Buta Singh Sidhu, Amita Rani, Amit Handa

Abstract:

The main objective of the present work is the microstructural and mechanical characterization of conventional and nanostructured TiAlN and AlCrN coatings deposited on T-11 boiler steel. For the conventional coatings, Al-Cr and Ti-Al metallic powders were deposited using the plasma spray process, followed by gas nitriding of the surface, which was performed in the lab with parameters optimized after several trials on plasma-spray-coated specimens. The plasma-assisted physical vapour deposition (PAPVD) process was employed to deposit the nanostructured TiAlN and AlCrN coatings. Field emission scanning electron microscopy (FE-SEM) with energy-dispersive X-ray analysis (EDAX), X-ray diffraction (XRD), atomic force microscopy (AFM) and X-ray mapping have been used to study the surface and cross-sectional morphology of the coatings. Surface roughness and micro-hardness were also measured. Good adhesion of the conventional thick TiAlN and AlCrN coatings was found. Based on the outcomes of this work, the coatings under study are recommended for application to the super-heater and re-heater tubes of boilers.

Keywords: nanostructure, physical vapour deposition, oxides, thin films, electron microscopy

Procedia PDF Downloads 143
2988 Preparation and Characterization of Modified ZnO Incorporated into Mesoporous MCM-22 Catalysts and Their Catalytic Performances of Crude Jatropha Oil to Biodiesel

Authors: Bashir Abubakar Abdulkadir, Anita Ramli, Lim Jun Wei, Yoshimitsu Uemura

Abstract:

In this study, ZnO/MCM-22 catalysts with different ZnO loadings were prepared using a conventional wet impregnation process, and their activity was tested for biodiesel production from Jatropha oil. The effects of reaction parameters on catalyst activity were investigated. The synthesized catalysts were characterized by X-ray diffraction (XRD) for crystal phase; Brunauer-Emmett-Teller (BET) analysis for surface area, pore volume and pore size; field emission scanning electron microscopy with energy-dispersive X-ray spectroscopy (FESEM/EDX) for morphology and elemental composition; and temperature-programmed desorption (NH3- and CO2-TPD) for the acidic and basic properties of the catalyst. The XRD spectra, together with the EDX results, show the presence of ZnO in the catalyst, confirming successful intercalation of the metal oxide into the mesoporous MCM-22. The BET findings confirm that the synthesized catalyst is mesoporous, and the TPD results indicate that it can be considered bifunctional. Transesterification results showed that the synthesized catalyst is highly efficient and effective for biodiesel production from low-grade oils such as Jatropha oil and for other industrial applications, with a high fatty acid methyl ester (FAME) yield achieved at moderate reaction conditions. It was also found that the catalyst can be reused for more than five runs with little deactivation, confirming that it is highly active and stable under the heat of reaction.

Keywords: MCM-22, synthesis, transesterification, ZnO

Procedia PDF Downloads 214
2987 Influence of Annealing Temperature on Optical, Anticandidal, Photocatalytic and Dielectric Properties of ZnO/TiO2 Nanocomposites

Authors: Wasi Khan, Suboohi Shervani, Swaleha Naseem, Mohd. Shoeb, J. A. Khan, B. R. Singh, A. H. Naqvi

Abstract:

We have successfully synthesized ZnO/TiO2 nanocomposites using a two-step solochemical synthesis method. The influence of annealing temperature on the microstructural, optical, anticandidal, photocatalytic and dielectric properties was investigated. X-ray diffraction (XRD) and scanning electron microscopy (SEM) confirm the formation of the nanocomposite and the uniform surface morphology of all samples. The UV-Vis spectra indicate a decrease in band gap energy with increasing annealing temperature. The anticandidal activity of the ZnO/TiO2 nanocomposite was evaluated against MDR C. albicans 077; the in-vitro killing assay revealed that the nanocomposite efficiently inhibits the growth of C. albicans 077. The nanocomposite also exhibited photocatalytic activity for the degradation of methyl orange, followed as a function of time at a wavelength of 465 nm. The electrical behaviour of the composite was studied over a wide range of frequencies at room temperature using complex impedance spectroscopy. The dielectric constant, dielectric loss and ac conductivity (σac) were studied as functions of frequency and explained by the Maxwell-Wagner model. The data reveal that the dielectric constant and loss (tanδ) exhibit normal dielectric behavior and decrease with increasing frequency.

Keywords: ZnO/TiO2 nanocomposites, SEM, photocatalytic activity, dielectric properties

Procedia PDF Downloads 410
2986 Demographic Shrinkage and Reshaping Regional Policy of Lithuania in Economic Geographic Context

Authors: Eduardas Spiriajevas

Abstract:

Since the end of the 20th century, when Lithuania regained its independence, a process of demographic shrinkage has been underway. It now affects the efficiency of regional development policy and the geographic scope of value added created in the regions. The demographic structure of human resources is reflected in the regions and their economic geographic environment. The reshaping of the economy and state reforms restructuring economic branches such as agriculture and industry have increased the economic significance of the services sector. These processes influence the competitiveness of the labor market and its demographic characteristics. Their most vivid consequences appear in the structure of human migration, which has driven the demographic ageing of human resources in the regions, especially peripheral ones. These phenomena of modern times induce the demographic shrinkage of society and shape its economic geographic characteristics, with consequences for regional development actions and regional policy. Internal and external migration has produced numerous regional economic disparities and influenced the territorial density and concentration of the population, creating economies of spatial unevenness even in such a small, geographically compact country as Lithuania. The territorial reshaping of population distribution creates new regions and economic environments that do not correspond to the main principles of regional policy or to its capacity to create well-being and promote attractiveness for economic development. These are new challenges for national regional policy, and they should be researched systematically, combining the analytical approaches of regional economics with economic geographic research methods.
A comparative territorial analysis according to the administrative division of Lithuania, combining a retrospective approach with the method of location quotients, yields results of an economic geographic character, with cartographic representations produced using the spatial analysis tools of Geographic Information Systems. Together, these research methods provide new, spatially evidenced results that must be taken into consideration when reshaping national regional policy in an economic geographic context. Given demographic shrinkage and the increasing differentiation of economic development within the regions, an economic geographic dimension is indispensable. To sustain territorially balanced economic development, the roles of regional centers (towns) must be strengthened, and these centers must be empowered with new economic functions to revitalize peripheral regions and increase their economic competitiveness and social capacity on the national scale.
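The location quotient method mentioned above has a compact definition: the ratio of a sector's regional employment share to its national share. A minimal sketch, with invented employment figures:

```python
# Illustrative sketch of the location quotient (LQ):
# LQ = (regional share of a sector) / (national share of that sector).
# All employment figures below are invented for demonstration.

def location_quotient(region_sector, region_total, nation_sector, nation_total):
    return (region_sector / region_total) / (nation_sector / nation_total)

# Hypothetical region: 2,000 of 10,000 jobs in agriculture,
# against 50,000 of 500,000 jobs nationally.
lq = location_quotient(2000, 10000, 50000, 500000)
print(round(lq, 2))  # 2.0: the sector is twice as concentrated regionally
```

An LQ above 1 marks a regional specialization; mapping LQ values per municipality is what gives the analysis its cartographic, economic geographic character.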

Keywords: demographic shrinkage, economic geography, Lithuania, regions

Procedia PDF Downloads 165
2985 Application of Deep Neural Networks to Assess Corporate Credit Rating

Authors: Parisa Golbayani, Dan Wang, Ionuț Florescu

Abstract:

In this work, we apply machine learning techniques to financial statement reports in order to assess a company's credit rating. Specifically, we analyze the performance of four neural network architectures (MLP, CNN, CNN2D, LSTM) in predicting corporate credit ratings as issued by Standard and Poor's, focusing on companies from the energy, financial, and healthcare sectors in the US. The goal of this analysis is to improve the application of machine learning algorithms to credit assessment. To accomplish this, the study investigates three questions. First, do the algorithms perform better when using a selected subset of important features, or is better performance obtained by allowing the algorithms to select features themselves? Second, how important is the temporal aspect inherent in financial data for the results obtained by a machine learning algorithm? Third, does one of the four neural network architectures consistently outperform the others, and if so, under which conditions? This work frames the problem as several case studies to answer these questions and analyzes the results using ANOVA and multiple comparison testing procedures.
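The ANOVA step used to compare the architectures can be illustrated with a hand-computed one-way F-statistic over accuracy samples. The accuracy figures below are invented, not the paper's results:

```python
# Minimal sketch of a one-way ANOVA F-statistic, as used to test whether
# several model architectures differ in mean accuracy. Data are synthetic.

def one_way_anova_F(groups):
    k = len(groups)                              # number of groups (models)
    n = sum(len(g) for g in groups)              # total observations
    grand = sum(sum(g) for g in groups) / n      # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical cross-validated accuracies for three architectures:
mlp  = [0.71, 0.72, 0.70]
cnn  = [0.78, 0.79, 0.77]
lstm = [0.80, 0.81, 0.79]
F = one_way_anova_F([mlp, cnn, lstm])
print(F > 4.0)  # True: a large F hints at real differences between models
```

A large F would then be followed by multiple comparison procedures to identify which pairs of architectures actually differ.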

Keywords: convolutional neural network, long short term memory, multilayer perceptron, credit rating

Procedia PDF Downloads 240
2984 Advanced Hybrid Particle Swarm Optimization for Congestion and Power Loss Reduction in Distribution Networks with High Distributed Generation Penetration through Network Reconfiguration

Authors: C. Iraklis, G. Evmiridis, A. Iraklis

Abstract:

Renewable energy sources and distributed generation units already play an important role in electrical power generation. A mixture of different technologies penetrating the electrical grid adds complexity to the management of distribution networks. High penetration of distributed generation units creates node over-voltages, large power losses, unreliable power management, reverse power flow and congestion. This paper presents an optimization algorithm capable of reducing congestion and power losses, combined as a weighted sum. Two factors describing congestion are proposed. An upgraded selective particle swarm optimization (SPSO) algorithm is used as the solution tool, focusing on the technique of network reconfiguration. The upgrade to the SPSO algorithm is the addition of a heuristic algorithm specializing in the reduction of power losses, and several scenarios are tested. Results show significant improvement in the minimization of losses and congestion while achieving very small computation times.
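The weighted-sum objective described above can be sketched as a single fitness function combining losses with two congestion factors. The weights, congestion factors and per-line loadings below are illustrative assumptions, not the paper's exact formulation:

```python
# Hedged sketch of a weighted-sum fitness for network reconfiguration:
# one value combining power losses with two congestion factors.
# All weights and line-loading data are invented for demonstration.

def fitness(losses_kw, line_loadings, w_loss=0.5, w_cong=0.5, limit=1.0):
    # Congestion factor 1: worst relative line loading.
    worst = max(line_loadings)
    # Congestion factor 2: share of lines above their loading limit.
    overloaded = sum(1 for l in line_loadings if l > limit) / len(line_loadings)
    congestion = 0.5 * worst + 0.5 * overloaded
    return w_loss * losses_kw + w_cong * congestion

# Two candidate switch configurations of the same feeder:
a = fitness(losses_kw=0.8, line_loadings=[0.6, 0.9, 1.2])   # one overload
b = fitness(losses_kw=0.9, line_loadings=[0.7, 0.8, 0.85])  # no overload
print(b < a)  # True under these made-up numbers: b is preferred
```

In the full algorithm, each particle encodes a radial switch state and this fitness is evaluated after a power-flow solution, with the heuristic loss-reduction step guiding the swarm.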

Keywords: congestion, distribution networks, loss reduction, particle swarm optimization, smart grid

Procedia PDF Downloads 450
2983 Power Quality Improvement Using UPQC Integrated with Distributed Generation Network

Authors: B. Gopal, Pannala Krishna Murthy, G. N. Sreenivas

Abstract:

The increasing demand for electric power places an emphasis on the maximum utilization of renewable energy sources, while maintaining power quality to the utility's satisfaction remains an essential requirement. This paper presents the design aspects of a Unified Power Quality Conditioner (UPQC) integrated with a photovoltaic system in a distributed generation network. The proposed system consists of a series inverter and a shunt inverter connected back to back on the dc side, sharing a common dc-link capacitor with the distributed generation through a boost converter. The primary task of the UPQC is to minimize grid voltage and load current disturbances together with reactive and harmonic power compensation. In addition to these primary tasks, other functionalities are addressed, such as compensation of voltage interruptions and active power transfer to the load and grid in both islanding and interconnected modes. The simulation model is designed in the MATLAB/Simulink environment, and the results are in good agreement with published work.

Keywords: distributed generation (DG), interconnected mode, islanding mode, maximum power point tracking (mppt), power quality (PQ), unified power quality conditioner (UPQC), photovoltaic array (PV)

Procedia PDF Downloads 512
2982 Multi Tier Data Collection and Estimation, Utilizing Queue Model in Wireless Sensor Networks

Authors: Amirhossein Mohajerzadeh, Abolghasem Mohajerzadeh

Abstract:

In this paper, a target parameter is estimated with the desired precision in hierarchical wireless sensor networks (WSNs), while the proposed algorithm also tries to prolong network lifetime as much as possible using an efficient data collection algorithm. The distribution function of the target parameter is considered unknown. Sensor nodes sense the environment and send the data to the base station, called the fusion center (FC), using a hierarchical data collection algorithm; the FC reconstructs the underlying phenomenon from the collected data. Given the aggregation level x, the goal is to provide the infrastructure needed to find the best value of x so as to prolong network lifetime as much as possible while guaranteeing the desired accuracy (the required sample size depends entirely on the desired precision). First, the sample size calculation algorithm is discussed; second, the average queue length is determined based on an M/M[x]/1/K queue model and used in the energy consumption calculation. Nodes can decrease transmission cost by aggregating incoming data. Finally, the performance of the new algorithm is evaluated in terms of lifetime and estimation accuracy.
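The queueing step can be illustrated with the closed-form mean occupancy of a plain M/M/1/K queue; the batch-arrival M/M[x]/1/K model of the paper requires a more elaborate Markov chain, so this is a simplified stand-in with illustrative rates:

```python
# Simplified stand-in for the queueing analysis: mean number of packets in
# an M/M/1/K queue (finite buffer K). Batch arrivals, as in M/M[x]/1/K,
# would need a richer chain. lam, mu and K below are illustrative.

def mm1k_mean_queue(lam, mu, K):
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:       # degenerate case rho == 1
        return K / 2.0
    num = rho * (1 - (K + 1) * rho**K + K * rho**(K + 1))
    den = (1 - rho) * (1 - rho**(K + 1))
    return num / den

L = mm1k_mean_queue(lam=2.0, mu=3.0, K=10)
print(0 < L < 10)  # True: mean occupancy lies within the buffer size
```

The mean queue length feeds directly into the per-node energy model, since waiting packets determine how much data each aggregation round must transmit.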

Keywords: aggregation, estimation, queuing, wireless sensor network

Procedia PDF Downloads 188
2981 Virtual Metrology for Copper Clad Laminate Manufacturing

Authors: Misuk Kim, Seokho Kang, Jehyuk Lee, Hyunchang Cho, Sungzoon Cho

Abstract:

In semiconductor manufacturing, virtual metrology (VM) refers to methods that predict the properties of a wafer from machine parameters and sensor data of the production equipment, without performing the costly physical measurement of those properties (Wikipedia). Additional benefits include the avoidance of human bias and the identification of important factors affecting process quality, which allow the process to be improved in the future. It is, however, rare to find VM applied to other areas of manufacturing. In this work, we propose applying VM to copper clad laminate (CCL) manufacturing. CCL is a core element of the printed circuit boards (PCBs) used in smartphones, tablets, digital cameras, and laptop computers. CCL manufacturing consists of three processes: treating, lay-up, and pressing. Treating, the most important of the three, applies resin to glass cloth, heats it in a drying oven, and produces prepreg for the lay-up process. In this process, three important quality factors are inspected: treated weight (T/W), minimum viscosity (M/V), and gel time (G/T). They are inspected manually, incurring heavy costs in time and money, which makes the process a good candidate for VM. We developed prediction models for T/W, M/V, and G/T using process, raw material, and environment variables, with actual process data obtained from a CCL manufacturer. A variety of variable selection methods and learning algorithms were employed to find the best prediction model. We obtained prediction models for M/V and G/T with sufficiently high accuracy. They also provided information on "important" predictor variables, some of which the process engineers were already aware of and some of which they were not; the engineers were excited by the new insights the models revealed and set out to analyze them further for process control implications.
T/W, however, could not be predicted with reasonable accuracy from the given factors. This very fact indicates that the factors currently monitored may not affect T/W, so an effort must be made to find other, currently unmonitored factors in order to understand the process better and improve its quality. In conclusion, the VM application to CCL's treating process was quite successful. The newly built quality prediction models reduce the cost associated with actual metrology and reveal insights both into the factors affecting the important quality factors and into the limits of our present understanding of the treating process.
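The idea of replacing physical metrology with a model can be sketched with a toy least-squares predictor; real VM models use many variables and richer learners, and all numbers here are synthetic:

```python
# Illustrative sketch of virtual metrology: fit a simple least-squares model
# that predicts a quality factor (say, minimum viscosity M/V) from a process
# variable (say, oven temperature) instead of measuring it physically.
# Every number below is synthetic, not from the CCL manufacturer's data.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

temps = [150, 160, 170, 180]       # process variable (degC) on past lots
visc  = [42.0, 40.0, 38.0, 36.0]   # physically measured M/V on those lots
slope, intercept = fit_line(temps, visc)
print(round(slope * 175 + intercept, 1))  # 37.0: predicted M/V for a new lot
```

Once validated against occasional physical measurements, such a model lets most lots skip the manual inspection entirely.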

Keywords: copper clad laminate, predictive modeling, quality control, virtual metrology

Procedia PDF Downloads 352
2980 Nietzsche and Shakti: An Intercultural Analysis of Nietzsche's Experiment with the Eternal Feminine

Authors: Shruti Jain

Abstract:

During its independence struggle in the early 20th century, India witnessed the politicisation of various spiritual paths, one of them being Shaktism. Interestingly, Nietzsche's teachings were being interpreted as essentially the worship of Shakti. The present paper investigates this claim and hence undertakes an intercultural archaeological excavation in the realm of the Goddess archetypes that Nietzsche's work invokes. Ariadne is placed next to Radha, Baubo to Lajja Gauri, Medusa to Chhinnamasta, Hecate to Kali and Dhumavati, and Athena to Saraswati. Indeed, the Eternal Feminine plays a vital role in Nietzsche's writings; one might recall that Nietzsche even declared himself to be the first psychologist of the Eternal Feminine. The present paper aims to illustrate how the matter of the Eternal Feminine, like all other matters, is subject to Nietzsche's basic creative principle of the transvaluation of values and new meaning-making. To achieve this, Nietzsche applies what Heidegger calls a 'cross-wise striking-through' technique in what can be termed his engagement with Shaktism. Hence, not only is the mystical ascent and descent of the creative energy (Kundalini Shakti) dealt with under erasure in Thus Spake Zarathustra, but the Three Metamorphoses also emerge as an instance of such an erasure, making the Devi invisible and yet not so invisible for an Indian reader.

Keywords: eternal feminine, Nietzsche and India, Shaktism, transvaluation of values

Procedia PDF Downloads 162
2979 Bioinformatics High Performance Computation and Big Data

Authors: Javed Mohammed

Abstract:

Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a mess; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge: delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers can transform data into models and simulations that will enable scientists, for the first time, to gain a profound understanding of the deepest biological functions. Solving biological problems may require high-performance computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity ranging from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and gene expression analysis have led to exponential growth in the amount of publicly available genomic data, and with this increased availability, traditional database approaches are no longer sufficient for rapidly performing life science queries involving the fusion of data types. Computing systems are now so powerful that researchers can consider modeling the folding of a protein or even simulating an entire human body. This research paper emphasizes computational biology's growing need for high-performance computing and Big Data.
It illustrates their indispensability in meeting the scientific and engineering challenges of the twenty-first century, and shows how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC, which provides sufficient capability for evaluating or solving more limited but meaningful instances. The paper also indicates solutions to optimization problems, discusses the benefits of Big Data for computational biology, and surveys the current state of the art and future generations of HPC computing with Big Data in biology.

Keywords: high performance, big data, parallel computation, molecular data, computational biology

Procedia PDF Downloads 366
2978 Causal Inference Engine between Continuous Emission Monitoring System Combined with Air Pollution Forecast Modeling

Authors: Yu-Wen Chen, Szu-Wei Huang, Chung-Hsiang Mu, Kelvin Cheng

Abstract:

This paper develops a data-driven model of the causality between the Continuous Emission Monitoring System (CEMS, operated by the Environmental Protection Administration, Taiwan) in industrial factories and the air quality of the surrounding environment. Compared to the heavy burden of traditional numerical models for regional weather and air pollution simulation, the lightweight proposed model can produce hourly forecasts from current observations of weather, air pollution and factory emissions. The observations include wind speed, wind direction, relative humidity, temperature and other variables, and can be collected in real time from the Open APIs of Civil IoT Taiwan, which are sourced from 439 weather stations, 10,193 qualitative air stations, 77 national quantitative stations and 140 CEMS-equipped industrial factories. This study completed a causal inference engine that provides air pollution forecasts for the next 12 hours related to local industrial factories. The pollution forecasts are produced hourly with a grid resolution of 1 km x 1 km on the IIoTC (Industrial Internet of Things Cloud) and saved in netCDF4 format. The procedures for generating forecasts comprise data recalibration, outlier elimination, Kriging interpolation, and particle tracking with random walk techniques for the mechanisms of diffusion and advection. The solution of these equations reveals the causality between factory emissions and the associated air pollution. Further, with the aid of installed real-time flue emission (total suspension particulate, TSP) sensors and the forecasted air pollution map, this study also discloses the conversion between TSP and PM2.5/PM10 for different regions and industrial characteristics, based on long-term data observation and calibration.
Together, these heterogeneous qualitative and quantitative time series realize a practicable cloud-based causal inference engine for factory management control. Once the forecasted air quality for a region is marked as harmful, the correlated factories are notified and asked to curtail operation and reduce emissions in advance.
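The particle tracking and random walk mechanism for advection and diffusion can be sketched as follows; the wind vector, diffusion strength and particle counts are illustrative, not the engine's calibrated values:

```python
# Hedged sketch of the random-walk particle model for advection + diffusion:
# particles released at a stack drift with the wind (advection) and spread
# by Gaussian random steps (diffusion). All parameters are illustrative.
import random

def advect_diffuse(n_particles, steps, wind=(1.0, 0.5), sigma=0.3, seed=0):
    rng = random.Random(seed)          # seeded for reproducibility
    cloud = [(0.0, 0.0)] * n_particles # all released at the stack (origin)
    for _ in range(steps):
        cloud = [(x + wind[0] + rng.gauss(0, sigma),
                  y + wind[1] + rng.gauss(0, sigma)) for x, y in cloud]
    return cloud

cloud = advect_diffuse(n_particles=500, steps=10)
mean_x = sum(x for x, _ in cloud) / len(cloud)
print(8 < mean_x < 12)  # True: the plume centre drifts about 10 units downwind
```

Binning the final particle positions onto a 1 km x 1 km grid gives the kind of hourly concentration map the abstract describes.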

Keywords: continuous emission monitoring system, total suspension particulates, causal inference, air pollution forecast, IoT

Procedia PDF Downloads 89
2977 Seismic Hazard Prediction Using Seismic Bumps: Artificial Neural Network Technique

Authors: Belkacem Selma, Boumediene Selma, Tourkia Guerzou, Abbes Labdelli

Abstract:

Natural disasters have occurred and will continue to cause human and material damage; "preventing" them will never be possible, but with the advancement of technology their prediction is. Even if natural disasters are effectively inevitable, their consequences may be partly controlled. The rapid growth of artificial intelligence (AI) has had a major impact on the prediction of natural disasters and on the risk assessment necessary for effective disaster reduction. Predicting earthquakes to prevent the loss of human lives and property damage is important, which is why it is crucial to develop techniques for forecasting this natural disaster. The present study analyzes the ability of artificial neural networks (ANNs) to predict earthquakes occurring in a given area. The data describe the problem of forecasting high-energy (higher than 10^4 J) seismic bumps in a coal mine, using two longwalls as an example; for this purpose, seismic bump data obtained from mines were analyzed. The results show that the ANN was able to predict earthquake parameters with high accuracy: the classification accuracy of the neural networks exceeds 94%, and the models developed are efficient, robust, and depend only weakly on the initial database.
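The classification task can be illustrated with a minimal single-neuron network (logistic regression trained by gradient descent) separating hazardous from non-hazardous shifts by released energy; the training data below are synthetic, not the mine dataset:

```python
# Hedged sketch: a one-neuron "network" classifying shifts as hazardous (1)
# or non-hazardous (0) from log10 of released seismic energy. Synthetic data.
import math

def train(data, epochs=2000, lr=0.5):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:                       # stochastic gradient descent
            p = 1 / (1 + math.exp(-(w * x + b)))  # sigmoid activation
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

# Feature: log10 of seismic energy (J); label 1 if a high-energy bump followed.
data = [(2.0, 0), (2.5, 0), (3.0, 0), (4.5, 1), (5.0, 1), (5.5, 1)]
w, b = train(data)
predict = lambda x: 1 / (1 + math.exp(-(w * x + b))) > 0.5
print(predict(5.2), predict(2.2))  # True False: the classes are separated
```

The study's ANN uses more inputs and hidden layers, but the training loop has the same shape: forward pass, error, weight update.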

Keywords: earthquake prediction, ANN, seismic bumps

Procedia PDF Downloads 133
2976 Investigating the Influence of Activation Functions on Image Classification Accuracy via Deep Convolutional Neural Network

Authors: Gulfam Haider, Sana Danish

Abstract:

Convolutional Neural Networks (CNNs) have emerged as powerful tools for image classification, and the choice of optimizer profoundly affects their performance. The study of optimizers and their adaptations remains a topic of significant importance in machine learning research: while numerous studies have explored and advocated various optimizers, the efficacy of these optimization techniques is still subject to scrutiny. This work addresses that question through a comprehensive analysis and evaluation. The primary focus of the investigation is the performance of different optimizers when employed in conjunction with the popular Rectified Linear Unit (ReLU) activation function, known for its favorable properties in prior research. Specifically, we evaluate these optimizers with both the original Softmax activation function and the modified ReLU activation function, carefully assessing their impact on overall performance. To achieve this, a series of experiments is conducted on a well-established benchmark dataset for image classification, the Canadian Institute for Advanced Research dataset (CIFAR-10). The optimizers selected for investigation encompass a range of prominent algorithms, including Adam, Root Mean Squared Propagation (RMSprop), Adadelta, the Adaptive Gradient Algorithm (Adagrad), and Stochastic Gradient Descent (SGD). The performance analysis comprises a comprehensive evaluation of the classification accuracy, convergence speed, and robustness of the CNN models trained with each optimizer. Through rigorous experimentation and meticulous assessment, we discern the strengths and weaknesses of the different optimization techniques, providing valuable insights into their suitability for image classification tasks.
By conducting this in-depth study, we contribute to the existing body of knowledge surrounding optimizers in CNNs, shedding light on their performance characteristics for image classification. The findings gleaned from this research serve to guide researchers and practitioners in making informed decisions when selecting optimizers and activation functions, thus advancing the state-of-the-art in the field of image classification with convolutional neural networks.
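The qualitative difference between the optimizers compared above can be illustrated on a toy 1-D objective; this stands in for the CIFAR-10 training runs, which require a full deep-learning stack, and all hyperparameters are illustrative:

```python
# Illustrative sketch: plain SGD versus RMSprop minimizing f(x) = x**2.
# SGD scales the step by the raw gradient; RMSprop normalizes it by a
# running average of squared gradients. Hyperparameters are illustrative.
import math

def sgd(x, lr=0.1, steps=200):
    for _ in range(steps):
        x -= lr * 2 * x              # gradient of x**2 is 2x
    return x

def rmsprop(x, lr=0.1, steps=200, beta=0.9, eps=1e-8):
    v = 0.0
    for _ in range(steps):
        g = 2 * x
        v = beta * v + (1 - beta) * g * g   # running average of g**2
        x -= lr * g / (math.sqrt(v) + eps)  # normalized update
    return x

x0 = 5.0
print(abs(sgd(x0)) < 1e-3)       # True: SGD contracts x by a factor 0.8 per step
print(abs(rmsprop(x0)) < 1.0)    # True: RMSprop also settles near the minimum
```

On real networks, the normalization is what lets RMSprop-family optimizers cope with gradients of very different scales across layers, which is the behaviour the CIFAR-10 experiments probe.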

Keywords: deep neural network, optimizers, RMsprop, ReLU, stochastic gradient descent

Procedia PDF Downloads 134
2975 Visible Light Communication and Challenges

Authors: Hamid Sharif, Nazish Saleem Abbas, Muhammad Haris Jamil

Abstract:

Visible light communication (VLC) has been an emerging technology for almost a decade now, and there is a growing need for VLC systems to overcome the challenges faced by radio frequency (RF) communication systems. With advances in the development of solid-state sources, these devices will in the future replace incandescent and fluorescent light sources; they can be employed not only for illumination but also for communication and navigation purposes. The replacement of conventional illumination sources with highly efficient light-emitting diodes (LEDs), generally emitting white light, will reduce energy consumption as well as environmental pollution. White LEDs dissipate far less power than conventional light sources. The use of LEDs is beneficial not only in terms of power consumption: they also have an intrinsic capability for indoor wireless communication that indoor RF communication lacks. VLC is considerably cheaper to operate than RF systems such as Wi-Fi routers, allows convenient reuse of bandwidth, and offers huge potential for high-data-rate transmission with enhanced data security. This paper provides an overview of some of the current challenges with VLC and proposes possible solutions to deal with them; it also examines some joint protocols for optimizing the combined illumination and communication functionality.

Keywords: visible light communication, line of sight, root mean square delay spread, light emitting diodes

Procedia PDF Downloads 76
2974 Guadua Bamboo as Eco-Friendly Element in Interior Design and Architecture

Authors: Sarah Noaman

Abstract:

Utilizing renewable resources has become a widespread response to many problems in Egypt today, playing a role in environmental issues such as the energy crisis, the lack of natural resources and climate change. This paper focuses on the key concepts of creating eco-friendly spaces in Egypt by using traditional perennial plants, such as Guadua bamboo, as renewable resources in the manufacture of structures. Egypt is in critical need of alternative raw materials; thus, this paper studies the use of neglected yet affordable materials such as Guadua bamboo in lightweight structures and digital fabrication. Guadua bamboo is cultivated throughout tropical and subtropical areas. In Egypt, it exists in many rural areas, where people try to control its growth with pesticides because it serves no economic purpose. This paper discusses the use of Guadua bamboo, either in its original state or after fabrication, in the context of interior design and architecture. The results show the applicability of perennial plants as complementary materials in manufacturing processes, and the conclusion highlights the importance of repurposing shallow-water plants in interior design and architecture.

Keywords: digital fabrication, Guadua bamboo, zero-waste material, sustainable material, interior architecture

Procedia PDF Downloads 155
2973 Fold and Thrust Belts Seismic Imaging and Interpretation

Authors: Sunjay

Abstract:

Plate tectonics is of great significance, as it represents the spatial relationships of volcanic rock suites at plate margins, the distribution in space and time of the conditions of different metamorphic facies, the scheme of deformation in mountain belts (orogens), and the association of different types of economic deposit. Orogenic belts are characterized by extensive thrust faulting, movements along large strike-slip fault zones, and extensional deformation that occurs deep within continental interiors. Within oceanic areas there are also regions of crustal extension and accretion in the backarc basins located on the landward sides of many destructive plate margins. Collisional orogens develop where a continent or island arc collides with a continental margin as a result of subduction; collisional and non-collisional orogens can be explained by differences in the strength and rheology of the continental lithosphere and by processes that influence these properties during orogenesis. Seismic imaging difficulties: in triangle zones, several factors reduce the effectiveness of seismic methods. The topography in the central part of the triangle zone is usually rugged and is associated with near-surface velocity inversions which degrade the quality of the seismic image. These characteristics lead to a low signal-to-noise ratio, inadequate penetration of energy through the overburden, poor geophone coupling with the surface, and wave scattering. Depth seismic imaging techniques: seismic processing alters the seismic data to suppress noise, enhance the desired signal (higher signal-to-noise ratio) and migrate seismic events to their appropriate locations in space and depth; processing steps generally include velocity analysis, static corrections, moveout corrections, stacking and migration.
Shadow zones and the bow-tie effect: shadow zones are areas with no reflections (dead areas), common in the vicinity of faults and other discontinuous areas in the subsurface. They result when energy from a reflector is focused onto receivers that produce other traces; as a consequence, reflectors are not shown in their true positions. Diffractions occur at subsurface discontinuities such as faults and velocity discontinuities (as at "bright spot" terminations), and the bow-tie effect is caused by deep-seated synclines. Related topics include seismic imaging of thrust faults and structural damage in deepwater thrust belts; imaging deformation in submarine thrust belts using seismic attributes; imaging thrust and fault zones using 3D seismic image processing techniques; and checking balanced structural cross sections against seismic interpretation pitfalls, which can originate from any or all of the limitations of data acquisition, processing, and interpretation of the subsurface geology. Seismic attributes are routinely used to accelerate and quantify the interpretation of tectonic features in 3D seismic data, with pitfalls and limitations of their own: coherence (or variance) cubes delineate the edges of megablocks and faulted strata, curvature delineates folds and flexures, while spectral components delineate lateral changes in thickness and lithology. Finally, because a fault may behave as either a seal or a conduit for hydrocarbon transport to a trap, these methods also matter for leakage surveillance in carbon capture and geological storage.

Keywords: tectonics, seismic imaging, fold and thrust belts, seismic interpretation

Procedia PDF Downloads 73