Search results for: lexical complexity
572 Decision Making in Medicine and Treatment Strategies
Authors: Kamran Yazdanbakhsh, Somayeh Mahmoudi
Abstract:
Three reasons argue for the use of decision theory in medicine. 1. The growth and complexity of medical knowledge make it difficult to process treatment information effectively without sophisticated analytical methods, especially when it comes to detecting errors and identifying treatment opportunities in large databases. 2. There is wide geographic variability in medical practice. In a context where medical costs are borne, at least in part, by the patient, these variations raise doubts about the relevance of the choices physicians make. The differences are generally attributed to differing estimates of the probabilities of treatment success and differing assessments of the value of success or failure. Without explicit decision criteria, it is difficult to identify precisely the sources of these variations in treatment. 3. Beyond the principle of informed consent, patients need to be involved in decision-making; for this, the decision process should be made explicit and broken down. A decision problem consists in selecting the best option from a set of choices. The difficulty lies in what is meant by "best option", that is, in knowing which criteria should guide the choice. The purpose of decision theory is to answer this question. The systematic use of decision models allows us to better understand differences in medical practice and facilitates the search for consensus. Three types of situations arise: certain situations, risky situations, and uncertain situations. 1. In certain situations, the consequence of each decision is certain. 2. In risky situations, each decision can have several consequences, and the probability of each consequence is known. 3. In uncertain situations, each decision can have several consequences, but their probabilities are not known. Our aim in this article is to show how decision theory can usefully be mobilized to meet the needs of physicians.
Decision theory can make decisions more transparent: first, by systematically clarifying the data of the problem, and second, by stating a few basic principles that should guide the choice. Once the problem is clarified, decision theory provides operational tools to represent the available information and determine patient preferences, and thus to assist the patient and the doctor in their choices.
Keywords: decision making, medicine, treatment strategies, patient
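The three types of situations above lend themselves to simple formal choice rules. As a purely illustrative sketch (the treatments, probabilities, and utilities below are invented here, not drawn from the article), expected utility ranks options in risky situations, while a cautious maximin rule can stand in when probabilities are unknown:

```python
# Hypothetical illustration of choice rules for risky and uncertain situations.
# Outcomes are utilities on a 0-100 scale; all numbers are invented.

# Risky situation: each treatment has several consequences with known probabilities.
treatments = {
    "surgery":    [(0.85, 90), (0.15, 10)],   # (probability, utility)
    "medication": [(0.60, 70), (0.40, 40)],
}

def expected_utility(consequences):
    """Expected utility when outcome probabilities are known."""
    return sum(p * u for p, u in consequences)

best_under_risk = max(treatments, key=lambda t: expected_utility(treatments[t]))

# Uncertain situation: probabilities are unknown, so a maximin rule
# picks the option whose worst outcome is least bad.
def worst_case(consequences):
    return min(u for _, u in consequences)

best_under_uncertainty = max(treatments, key=lambda t: worst_case(treatments[t]))

print(best_under_risk)         # surgery: 0.85*90 + 0.15*10 = 78 vs 58
print(best_under_uncertainty)  # medication: worst case 40 vs 10
```

The two rules can disagree, which is exactly the kind of divergence an explicit decision model makes visible to patient and physician.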
Procedia PDF Downloads 579
571 An Unsupervised Domain-Knowledge Discovery Framework for Fake News Detection
Authors: Yulan Wu
Abstract:
With the rapid development of social media, the issue of fake news has gained considerable prominence, drawing the attention of both the public and governments. The widespread dissemination of false information poses a tangible threat across multiple domains of society, including politics, the economy, and health. However, much research has concentrated on supervised models trained within specific domains, and their effectiveness diminishes when they are applied to identify fake news across multiple domains. To address this problem, some approaches based on domain labels have been proposed: by assigning news to its specific area in advance, a classifier specialized to the corresponding field can judge fake news more accurately. However, these approaches disregard the fact that a news record can pertain to multiple domains, resulting in a significant loss of valuable information. In addition, the datasets used for training must all be domain-labeled, which creates unnecessary complexity. To solve these problems, an unsupervised domain-knowledge discovery framework for fake news detection is proposed. First, to retain the multi-domain knowledge of the text effectively, a low-dimensional vector capturing domain embeddings is generated for each news text. Subsequently, a feature extraction module utilizing the domain embeddings discovered without supervision is used to extract comprehensive news features. Finally, a classifier determines the authenticity of the news. To verify the proposed framework, tests are conducted on existing, widely used datasets, and the experimental results demonstrate that the method improves fake news detection performance across multiple domains. Moreover, even on datasets that lack domain labels, the method can still transfer domain knowledge effectively, which reduces the time consumed by tagging without sacrificing detection accuracy.
Keywords: fake news, deep learning, natural language processing, multiple domains
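The three-stage pipeline described above (unsupervised domain embedding, feature extraction, classification) can be sketched in miniature. This is a hedged illustration only: the toy data, the k-means-style clustering, and the soft-assignment embedding below are stand-ins invented here, not the framework's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bag-of-words vectors for 8 news texts over a 20-word vocabulary;
# all data are invented for illustration.
X = rng.random((8, 20))

def domain_embeddings(X, k=3, iters=10):
    """Unsupervised, k-means-style soft domain embedding: for each text,
    a low-dimensional vector of affinities to k discovered domain centroids."""
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each text to its nearest centroid, then update the centroids
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    # soft assignment: normalized inverse distances serve as the embedding,
    # so a text can belong to several domains at once
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    w = 1.0 / (d + 1e-9)
    return w / w.sum(axis=1, keepdims=True)

E = domain_embeddings(X)
features = np.hstack([X, E])   # comprehensive features: text plus domain knowledge
print(E.shape, features.shape)  # each row of E sums to 1
```

The point of the soft assignment is the one the abstract makes: unlike a hard domain label, the embedding preserves a record's membership in multiple domains.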
Procedia PDF Downloads 96
570 Efficient Compact Micro Dielectric Barrier Discharge (DBD) Plasma Reactor for Ozone Generation for Industrial Application in Liquid and Gas Phase Systems
Authors: D. Kuvshinov, A. Siswanto, J. Lozano-Parada, W. Zimmerman
Abstract:
Ozone is well known as a powerful oxidant with a fast reaction rate. Ozone-based processes leave no by-products, as non-reacted ozone reverts to the original oxygen molecule. The application of ozone is therefore widely accepted as one of the main directions in the development of sustainable and clean technologies. A number of technologies require ozone to be delivered to specific points of a production network or reactor construction. Due to space constraints and the high reactivity and short lifetime of ozone, the use of ozone generators, even at bench-top scale, is practically limited. This calls for the development of mini/micro-scale ozone generators that can be incorporated directly into production units. Our report presents a feasibility study of a new micro-scale reactor for ozone generation (MROG). Data on MROG calibration and indigo decomposition at different operating conditions are presented. At the selected operating conditions, with a residence time of 0.25 s, the ozone generation process is not limited by reaction rate, and the amount of ozone produced is a function of the power applied. It was shown that the MROG is capable of producing ozone at voltages starting from 3.5 kV, with an ozone concentration of 5.28E-6 mol/L at 5 kV. This is in line with the data presented in a numerical investigation of the MROG. It was shown that, compared to a conventional ozone generator, the MROG has lower power consumption at low voltages and atmospheric pressure. The MROG construction makes it applicable to emerged and dry systems. With its robust, compact design, the MROG can be incorporated as a unit in production lines of high complexity.
Keywords: dielectric barrier discharge (DBD), micro reactor, ozone, plasma
Procedia PDF Downloads 338
569 Interrogating Bishwas: Reimagining a Christian Neighbourhood in Kolkata, India
Authors: Abhijit Dasgupta
Abstract:
This paper explores the everyday lives of the Christians residing in a Bengali Christian neighbourhood in Kolkata, termed here the larger Christian para (para meaning neighbourhood in Bengali). Through ethnography and a reading of secondary sources, the paper discerns how Christians across denominations (Protestants, Catholics, and Pentecostals) invoke the role of bishwas (faith and belief) in their interpersonal neighbourhood relations. The paper attempts to capture the role of bishwas in producing, transforming, and revising the meaning of 'neighbourhood' and 'neighbours', and puts forward the argument that the neighbourhood is a theological product. By interrogating and interpreting bishwas through everyday theological discussions and reflections, the paper examines and analyses the ways in which everyday theology becomes an essential source of power and knowledge for Bengali Christians in reimagining their neighbourhood in relation to the nearby Hindu neighbourhoods. Drawing on literature on everyday theology, faith, and belief, the paper reads and analyses various interpretations of theological knowledge across denominations to probe the prominence of bishwas within the Christian community and its role in creating a difference in their place of dwelling. The paper argues that the meaning of the neighbourhood is revisited through prayers, sermons, and biblical verses. At the same time, divisions and fissures appear among Protestants and Catholics, and also between native Bengali Protestants and non-native Protestant pastors, which informs us about the complexity of theology in constituting everyday life. Thus, the paper addresses theology's role in creating an ethical Christian neighbourhood amidst the everyday tensions and hostilities of diverse religious persuasions, while also looking into the processes through which multiple forms of theological knowledge lead to schism and interdenominational hostilities.
By attempting to answer these questions, the paper brings out the Christians' negotiation with the neighbourhood.
Keywords: anthropology, bishwas, Christianity, neighbourhood, theology
Procedia PDF Downloads 87
568 Characterization of Chest Pain in Patients Consulting to the Emergency Department of a Health Institution High Level of Complexity during 2014-2015, Medellin, Colombia
Authors: Jorge Iván Bañol-Betancur, Lina María Martínez-Sánchez, María de los Ángeles Rodríguez-Gázquez, Estefanía Bahamonde-Olaya, Ana María Gutiérrez-Tamayo, Laura Isabel Jaramillo-Jaramillo, Camilo Ruiz-Mejía, Natalia Morales-Quintero
Abstract:
Acute chest pain is a distressing sensation between the diaphragm and the base of the neck, and it represents a diagnostic challenge for any physician in the emergency department. Objective: To establish the main clinical and epidemiological characteristics of patients who presented with chest pain to the emergency department of a private clinic in the city of Medellin during 2014-2015. Methods: Cross-sectional retrospective observational study. The population and sample were patients who consulted for chest pain in the emergency department and met the eligibility criteria. The information was analyzed with the SPSS program, v. 21; qualitative variables were described through relative frequencies, and quantitative variables through means and standard deviations or medians, according to their distribution in the study population. Results: A total of 231 patients were evaluated; the mean age was 49.5 ± 19.9 years, and 56.7% were female. The most frequent pathological antecedents were hypertension (35.5%), diabetes (10.8%), dyslipidemia (10.4%), and coronary disease (5.2%). Regarding pain features, in 40.3% of the patients the pain began abruptly, in 38.2% it had a precordial location, in 20% of cases physical activity acted as a trigger, and in 60.6% it was oppressive. Costochondritis was the most common cause of chest pain among patients with an established etiologic diagnosis, representing 18.2%. Conclusions: Although the reported clinical features of the pain coincide with the clinical presentation of an acute coronary syndrome, the most common cause of chest pain in the study population was instead costochondritis, indicating that it is a differential diagnosis to consider in the approach to patients with acute chest pain.
Keywords: acute coronary syndrome, chest pain, epidemiology, osteochondritis
Procedia PDF Downloads 343
567 Comparison of Different Artificial Intelligence-Based Protein Secondary Structure Prediction Methods
Authors: Jamerson Felipe Pereira Lima, Jeane Cecília Bezerra de Melo
Abstract:
The difficulty and cost of obtaining protein tertiary structure information through experimental methods, such as X-ray crystallography or NMR spectroscopy, spurred the development of computational methods for doing so. One such approach is the prediction of the three-dimensional structure from the residue chain; however, this has been proven to be an NP-hard problem, a complexity illustrated by the Levinthal paradox. An alternative solution is the prediction of intermediary structures, such as the secondary structure of the protein. Artificial intelligence methods, such as Bayesian statistics, artificial neural networks (ANN), and support vector machines (SVM), among others, have been used to predict protein secondary structure. Owing to their good results, artificial neural networks have become a standard method for predicting protein secondary structure. Recently published methods that use this technique have, in general, achieved a Q3 accuracy between 75% and 83%, whereas the theoretical accuracy limit for protein secondary structure prediction is 88%. Alternatively, to achieve better results, prediction methods based on support vector machines have been developed. The statistical evaluation of methods that use different AI techniques, such as ANNs and SVMs, is not a trivial problem, since different training sets, validation techniques, and other variables can influence the behavior of a prediction method. In this study, we propose a prediction method based on artificial neural networks, which is then compared with a selected SVM method. The chosen SVM protein secondary structure prediction method is the one proposed by Huang in his work Extracting Physicochemical Features to Predict Protein Secondary Structure (2013).
The developed ANN method follows the same training and testing process that Huang used to validate his method, namely the CB513 protein data set with three-fold cross-validation, so that the statistical results of the two methods can be compared directly.
Keywords: artificial neural networks, protein secondary structure, protein structure prediction, support vector machines
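The Q3 accuracy quoted above is simply the fraction of residues whose three-state label (helix H, strand E, coil C) is predicted correctly. A minimal sketch, with made-up sequences rather than CB513 data:

```python
def q3_accuracy(predicted, observed):
    """Fraction of residues whose three-state label (H/E/C) is correct."""
    assert len(predicted) == len(observed)
    correct = sum(p == o for p, o in zip(predicted, observed))
    return correct / len(observed)

# Toy example: 8 residues, 6 predicted correctly -> Q3 = 0.75
obs  = "HHHHEECC"
pred = "HHHHEEHH"
print(q3_accuracy(pred, obs))  # 0.75
```

In the study, this metric would be averaged over the three cross-validation folds for each method before comparison.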
Procedia PDF Downloads 621
566 Comparison of Support Vector Machines and Artificial Neural Network Classifiers in Characterizing Threatened Tree Species Using Eight Bands of WorldView-2 Imagery in Dukuduku Landscape, South Africa
Authors: Galal Omer, Onisimo Mutanga, Elfatih M. Abdel-Rahman, Elhadi Adam
Abstract:
Threatened tree species (TTS) play a significant role in ecosystem functioning and services, land use dynamics, and other socio-economic aspects. Such aspects include ecological, economic, livelihood, security-based, and well-being benefits. The development of techniques for mapping and monitoring TTS is thus critical for understanding the functioning of ecosystems. The advent of advanced imaging systems and supervised learning algorithms has provided an opportunity to classify TTS over a fragmenting landscape. Recently, vegetation maps have been produced using advanced imaging systems such as WorldView-2 (WV-2) and robust classification algorithms such as support vector machines (SVM) and artificial neural networks (ANN). However, delineation of TTS in a fragmenting landscape using high-resolution imagery has largely remained elusive due to the complexity of the species structure and their distribution. Therefore, the objective of the current study was to examine the utility of advanced WV-2 data for mapping TTS in the fragmenting Dukuduku indigenous forest of South Africa using the SVM and ANN classification algorithms. The results showed the robustness of the two machine learning algorithms, with an overall accuracy (OA) of 77.00% (total disagreement = 23.00%) for SVM and 75.00% (total disagreement = 25.00%) for ANN using all eight bands of WV-2 (8B). This study concludes that the SVM and ANN classification algorithms with WV-2 8B have the potential to classify TTS in the Dukuduku indigenous forest. The study offers relatively accurate information that is important for forest managers in making informed decisions regarding the management and conservation protocols of TTS.
Keywords: artificial neural network, threatened tree species, indigenous forest, support vector machines
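The overall accuracy and total disagreement reported above are complementary quantities computed from a classification confusion matrix. A minimal sketch with an invented two-class matrix chosen so the numbers echo the SVM result (77% OA, 23% disagreement):

```python
def overall_accuracy(confusion):
    """OA = sum of diagonal (correctly classified) cells over all reference samples."""
    total = sum(sum(row) for row in confusion)
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    return correct / total

# Invented confusion matrix: rows = reference class, columns = predicted class.
cm = [[40, 10],
      [13, 37]]

oa = overall_accuracy(cm)
print(oa)        # 0.77
print(1.0 - oa)  # total disagreement = 0.23
```

Total disagreement is just the complement of OA, which is why the paired figures in the abstract always sum to 100%.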
Procedia PDF Downloads 515
565 Meet Automotive Software Safety and Security Standards Expectations More Quickly
Authors: Jean-François Pouilly
Abstract:
This study addresses the growing complexity of embedded systems and the critical need for secure, reliable software. Traditional cybersecurity testing methods, often conducted late in the development cycle, struggle to keep pace. This talk explores how formal methods, integrated with advanced analysis tools, empower C/C++ developers to: 1) proactively address vulnerabilities and bugs, using formal methods and abstract interpretation techniques to identify potential weaknesses early in the development process, reducing the reliance on penetration and fuzz testing in later stages; 2) streamline development by focusing on the bugs that matter: with close to no false positives and flaws caught earlier, the need for rework and retesting is minimized, leading to faster development cycles, improved efficiency, and cost savings; 3) enhance software dependability: combining static analysis by abstract interpretation, with full context sensitivity and hardware memory awareness, allows a more comprehensive understanding of potential vulnerabilities, leading to more dependable and secure software. This approach aligns with industry best practices (ISO 26262 or ISO 21434) and empowers C/C++ developers to deliver robust, secure embedded systems that meet the demands of today's and tomorrow's applications. We illustrate the approach with the TrustInSoft analyzer, showing how it accelerates verification for complex cases, reduces user fatigue, and improves developer efficiency, cost-effectiveness, and software cybersecurity. In summary, integrating formal methods and sound analyzers enhances software reliability and cybersecurity, streamlining development in an increasingly complex environment.
Keywords: safety, cybersecurity, ISO 26262, ISO 21434, formal methods
Procedia PDF Downloads 19
564 Narrative Constructs and Environmental Engagement: A Textual Analysis of Climate Fiction’s Role in Shaping Sustainability Consciousness
Authors: Dean J. Hill
Abstract:
This paper undertakes an in-depth textual analysis of the cli-fi genre, examining how writing in the genre expresses and facilitates the articulation of environmental consciousness through narrative form. The paper begins by situating cli-fi within the literary continuum of ecological narratives and identifying the unique textual characteristics and thematic preoccupations of the genre. It then shows how cli-fi transforms the esoteric nature of climate science into credible narrative forms through language use, metaphorical constructs, and narrative framing, and examines how descriptive and figurative language in depictions of nature and disaster makes climate change vivid and emotionally resonant. The work also points out the dialogic nature of cli-fi, whereby characters and narrators experience inner disputes over the ethical dilemmas of environmental destruction, demanding that readers challenge and re-evaluate their standpoints on sustainability and ecological responsibility. The paper proceeds to analyse narrative voice and its role in eliciting empathy and reader involvement with the ecological material. By looking at how different narratorial perspectives shape the reader's emotional and cognitive reactions to the text, this study demonstrates the power of perspective in developing intimacy with the genre's dominant concerns. Finally, the emotional arc of cli-fi narratives, running its course over themes of loss, hope, and resilience, is analysed in relation to how these elements function to marshal public feeling and discourse into action on climate change.
The textual complexity of cli-fi thus not only conveys the hard reality of climate change but also influences public perception and behaviour toward a more sustainable future.
Keywords: cli-fi genre, ecological narratives, emotional arc, narrative voice, public perception
Procedia PDF Downloads 31
563 Organization of the Purchasing Function for Innovation
Authors: Jasna Prester, Ivana Rašić Bakarić, Božidar Matijević
Abstract:
A substantial scholarly and practitioner-oriented literature on innovation orientation has shown positive effects on firm performance. A myriad of factors influence and enhance innovation, and it has been found in the literature that new product innovations account for an average of 14 percent of sales revenues across firms. If one thing has changed in innovation management during the last decade, it is the growing reliance on external partners. As a consequence, a new task for purchasing arises, as firms need to understand which suppliers actually have high potential to contribute to the innovativeness of the firm and which do not. The purchasing function in an organization is extremely important, as it handles on average 50% or more of a firm's expenditures. In the nineties, the purchasing department was largely seen as a transaction-oriented, clerical function, but today purchasing integration provides a formal interface mechanism between purchasing and the other functions it serves within the company. The purchasing function has to be organized differently to enable the firm's innovation potential. However, innovations are inherently risky. There are behavioral risks (that one partner will take advantage of the other), technological risks arising from the complexity of products, manufacturing processes, and incoming materials, and finally market risks, which ultimately judge the value of the innovation. These risks are investigated in this work, since it has been found in the literature that the higher the technological risk, the higher the centralization of the purchasing function as an interface with other supply chain members. Most research on the organization of the purchasing function has been done through case-study analysis of innovative firms. This work attempts to confirm or discard the results found in that case-study literature.
A large data set of 1,493 companies from 25 countries, collected in the GMRG 4 survey, served as the basis for the analysis.
Keywords: purchasing function organization, innovation, technological risk, GMRG 4 survey
Procedia PDF Downloads 482
562 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing
Authors: Yehjune Heo
Abstract:
As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using convolutional neural networks (CNNs), based on the types of loss functions and optimizers. The CNNs used in this paper include AlexNet, VGGNet, and ResNet. By using various loss functions, including cross-entropy, center loss, cosine proximity, and hinge loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. Using a subset of the LivDet 2017 database, we validate our approach by comparing generalization power. It is important to note that we use a subset of LivDet and that the database is the same across all training and testing for each model. This way, we can compare the performance, in terms of generalization, on unseen data across all the different models. The best CNN (AlexNet), with the appropriate loss function and optimizer, results in a performance gain of more than 3% over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports the parameter counts and mean average error rates of the high-accuracy models, in order to find the model that consumes the least memory and computation time for training and testing. Although AlexNet is less complex than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and should run very fast with high anti-spoofing performance.
For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied in our final model.
Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer
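Two of the loss functions compared above penalize the same prediction quite differently, which is why the choice matters per model. A small illustration with toy numbers (one binary example, not the paper's fingerprint data), contrasting binary cross-entropy with hinge loss:

```python
import math

def cross_entropy(p, y):
    """Binary cross-entropy for predicted probability p of class 1, true label y in {0, 1}."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def hinge(score, y_pm):
    """Hinge loss for a raw classifier score and a label in {-1, +1}."""
    return max(0.0, 1.0 - y_pm * score)

# The same borderline positive prediction, two different penalties:
print(round(cross_entropy(0.6, 1), 4))  # 0.5108: still penalized, decreasing smoothly
print(hinge(0.6, +1))                   # 0.4: penalized until the margin is reached
print(hinge(1.5, +1))                   # 0.0: confident correct answers cost nothing
```

Cross-entropy keeps pushing probabilities toward 1, while hinge loss goes silent once the margin is satisfied; paired with different optimizers, such differences plausibly account for the per-model performance swings the paper reports.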
Procedia PDF Downloads 136
561 Parameters Influencing Human Machine Interaction in Hospitals
Authors: Hind Bouami
Abstract:
Handling the complexity of life-critical systems requires appropriate technology and the right human-agent functions, such as knowledge, experience, and competence in problem prevention and solving. Human agents are involved in the management and control of human-machine system performance. Documenting human agents' situation awareness is crucial to support the decision-making of human-machine designers. Knowledge about risks, critical parameters, and factors that can impact and threaten automation system performance should be collected using preventive and retrospective approaches. This paper aims to document operators' situation awareness through the analysis of feedback from automated organizations. Analyzing feedback from automated hospital pharmacies helps to identify and control the critical parameters influencing human-machine interaction, in order to enhance system performance and security. Our human-machine system evaluation approach has been deployed in the pharmacy of the Macon hospital center, which has been equipped with automated drug dispensing systems since 2015. The automation specifications relate to technical aspects, human-machine interaction, and human aspects. The evaluation of drug delivery automation performance in the Macon hospital center has shown that the performance of the automated activity depends on the performance of the automated solution chosen, and also on the control of systemic factors. In fact, 80.95% of the automation specifications related to the chosen Sinteco automated solution are met. The performance of the chosen automated solution is involved in 28.38% of the automation specifications' performance in the Macon hospital center. The remaining systemic parameters involved in the specifications' performance need to be controlled.
Keywords: life-critical systems, situation awareness, human-machine interaction, decision-making
Procedia PDF Downloads 181
560 Integration of Polarization States and Color Multiplexing through a Singular Metasurface
Authors: Tarik Sipahi
Abstract:
Photonics research continues to push the boundaries of optical science, and the development of metasurface technology has emerged as a transformative force in this domain. The work presents the intricacies of a unified metasurface design tailored for efficient polarization and color control in optical systems. The proposed unified metasurface serves as a singular, nanoengineered optical element capable of simultaneous polarization modulation and color encoding. Leveraging principles from metamaterials and nanophotonics, this design allows for unprecedented control over the behavior of light at the subwavelength scale. The metasurface's spatially varying architecture enables seamless manipulation of both polarization states and color wavelengths, paving the way for a paradigm shift in optical system design. The advantages of this unified metasurface are diverse and impactful. By consolidating functions that traditionally require multiple optical components, the design streamlines optical systems, reducing complexity and enhancing overall efficiency. This approach is particularly promising for applications where compactness, weight considerations, and multifunctionality are crucial. Furthermore, the proposed unified metasurface design not only enhances multifunctionality but also addresses key challenges in optical system design, offering a versatile solution for applications demanding compactness and lightweight structures. The metasurface's capability to simultaneously manipulate polarization and color opens new possibilities in diverse technological fields. The research contributes to the evolution of optical science by showcasing the transformative potential of metasurface technology, emphasizing its role in reshaping the landscape of optical system architectures. 
This work represents a significant step forward in the ongoing pursuit of pushing the boundaries of photonics, providing a foundation for future innovations in compact and efficient optical devices.
Keywords: metasurface, nanophotonics, optical system design, polarization control
Procedia PDF Downloads 53
559 Generative Design Method for Cooled Additively Manufactured Gas Turbine Parts
Authors: Thomas Wimmer, Bernhard Weigand
Abstract:
The improvement of gas turbine efficiency is one of the main drivers of research and development in the gas turbine market. This has led to gas turbine inlet temperatures beyond the melting point of the materials utilized. The turbine parts need to be actively cooled in order to withstand these harsh environments. However, the usage of compressor air as coolant decreases the overall gas turbine efficiency, so coolant consumption needs to be minimized in order to gain the maximum advantage from higher turbine inlet temperatures. Sophisticated cooling designs for gas turbine parts therefore aim to minimize coolant mass flow. New design space is becoming accessible as additive manufacturing matures to industrial usage for the creation of hot-gas flow path parts. By making use of this technology, more efficient cooling schemes can be manufactured. In order to find such cooling schemes, a generative design method is being developed. It randomly generates cooling schemes that adhere to a set of rules assuring the sanity of the design. A large number of different cooling schemes are generated and implemented in a simulation environment, where they are validated. The criteria for the fitness of the cooling schemes are coolant mass flow, maximum temperature, and temperature gradients. In this way, the whole design space is sampled and a Pareto-optimal front can be identified. The approach is applied to a flat plate, which resembles a simplified section of a hot-gas flow path part. Realistic boundary conditions are applied, and thermal barrier coating is accounted for in the simulation environment. The resulting cooling schemes are presented and compared to representative conventional cooling schemes. Further development of this method can give access to cooling schemes with even better performance and higher complexity, making full use of the available design space.
Keywords: additive manufacturing, cooling, gas turbine, heat transfer, heat transfer design, optimization
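The Pareto-optimal front mentioned above can be identified by filtering out dominated designs. A minimal sketch, with invented cooling-scheme metrics (coolant mass flow and maximum temperature, both to be minimized) standing in for the simulation results:

```python
# Each candidate cooling scheme: (coolant mass flow [g/s], maximum temperature [K]).
# Both objectives are minimized; all numbers are invented for illustration.
schemes = {
    "A": (1.0, 900.0),
    "B": (1.2, 850.0),
    "C": (1.3, 920.0),   # worse than A in both objectives
    "D": (0.8, 950.0),
}

def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# A scheme is on the Pareto front if no other scheme dominates it.
pareto = {name for name, obj in schemes.items()
          if not any(dominates(other, obj) for other in schemes.values() if other != obj)}

print(sorted(pareto))  # ['A', 'B', 'D']; C is dominated by A
```

Designs on the front trade coolant consumption against metal temperature, which is exactly the trade-off the generative method samples across the design space.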
Procedia PDF Downloads 352
558 Embracing Complex Femininity: A Comparative Analysis of the Representation of Female Sexuality in John Webster and William Faulkner
Authors: Elisabeth Pedersen
Abstract:
Representations and interpretations of womanhood and female sexuality raise various questions regarding gender norms and the implications of these norms, which are pervasive and repetitive across societies. Literature is one form of media that provides the space to represent and interpret women, their bodies, and their sexualities, and it also reveals the power of language as an affective and affected force. As literature allows an opportunity to explore history and the representations of gender, power dynamics, and sexuality in historical context, this paper uses engaged theory in a comparative analysis of two works of literature: The Duchess of Malfi by John Webster and The Sound and the Fury by William Faulkner. These works span space and time, supporting the theory that repetitive tropes of womanhood and female sexuality in literature are influenced by, and have an influence on, the hegemonic social order throughout history. The paper analyzes how the representation of the dichotomy of male chivalry and honor versus female purity is disputed and questioned when a woman is portrayed as sexually emancipated, and it explores the historical contexts in which these works were written to examine how socioeconomic events challenged the hegemonic social order. The analysis looks at how stereotypical ideals of womanhood and manhood have damaging implications for women, as the structure of society grants more privilege and power to men than to women, thus creating a double standard for men and women with regard to sexuality, sexual expression, and the right to sexual desire. This comparative analysis reveals how strict gender norms persist and have negative consequences. However, re-reading these stories through a critical lens provides an opportunity to challenge the repetitive tropes of female sexuality, and thus to embrace the complexity of female sexuality and expression.
Keywords: femininity, literature, representation, sexuality
Procedia PDF Downloads 358
557 In situ Real-Time Multivariate Analysis of Methanolysis Monitoring of Sunflower Oil Using FTIR
Authors: Pascal Mwenge, Tumisang Seodigeng
Abstract:
The combination of world population growth and the third industrial revolution has led to a high demand for fuels. On the other hand, the depletion of global fossil fuel deposits and the environmental air pollution caused by these fuels have compounded the challenges the world faces due to its need for energy. Therefore, new forms of environmentally friendly and renewable fuels such as biodiesel are needed. The primary analytical techniques for methanolysis yield monitoring have been chromatography and spectroscopy; these methods have proven reliable but are demanding, costly, and do not provide real-time monitoring. In this work, the in situ monitoring of biodiesel production from sunflower oil using FTIR (Fourier Transform Infrared) spectroscopy has been studied; the study was performed using an EasyMax Mettler Toledo reactor equipped with a DiComp (diamond) probe. The quantitative monitoring of methanolysis was performed by building a quantitative model with multivariate calibration using the iC Quant module from the iC IR 7.0 software. Fifteen samples of known concentration, taken in duplicate, were used for model calibration and cross-validation; the data were pre-processed using mean centering and variance scaling, square-root spectrum math, and solvent subtraction. These pre-processing methods improved the performance indexes RMSEC, RMSECV, RMSEP, and cumulative R2 from 7.98 to 0.0096, 11.2 to 3.41, 6.32 to 2.72, and 0.9416 to 0.9999, respectively. The R2 values of 1 (training), 0.9918 (test), and 0.9946 (cross-validation) indicated the fitness of the model built. The model was tested against a univariate model; small discrepancies were observed at low concentrations due to unmodelled intermediates, but the two agreed closely at concentrations above 18%. The software eliminated the complexity of the Partial Least Squares (PLS) chemometrics.
It was concluded that the model obtained could be used to monitor the methanolysis of sunflower oil at industrial and laboratory scales.
Keywords: biodiesel, calibration, chemometrics, methanolysis, multivariate analysis, transesterification, FTIR
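The calibration diagnostics named above (RMSEC for the training fit, RMSECV from cross-validation) can be sketched without the iC Quant tooling. The snippet below is a minimal illustration on synthetic "spectra", with ordinary least squares standing in for the PLS step; none of the numbers are from the paper.

```python
import numpy as np

def autoscale(X):
    """Mean-center and variance-scale each column (as in the paper's pre-processing)."""
    mu, sigma = X.mean(axis=0), X.std(axis=0, ddof=1)
    return (X - mu) / sigma

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def calibrate(X, y):
    """Ordinary least-squares calibration (a stand-in for the PLS step)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return beta

def predict(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

def rmsecv_loo(X, y):
    """Leave-one-out cross-validation error (RMSECV)."""
    errs = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        beta = calibrate(X[mask], y[mask])
        errs.append(y[i] - predict(beta, X[i:i + 1])[0])
    return float(np.sqrt(np.mean(np.square(errs))))

# Synthetic "spectra": 15 samples, 3 wavelength ratios, known concentrations.
rng = np.random.default_rng(0)
conc = np.linspace(2, 30, 15)
spectra = np.outer(conc, [0.8, 0.5, 0.2]) + rng.normal(0, 0.1, (15, 3))
Xs = autoscale(spectra)
beta = calibrate(Xs, conc)
rmsec = rmse(conc, predict(beta, Xs))   # training-set error
rmsecv = rmsecv_loo(Xs, conc)           # cross-validated error
```

In practice the cross-validated error (RMSECV) is the more honest figure of merit, since the training error can be driven arbitrarily low by overfitting.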
Procedia PDF Downloads 148
556 Multiple-Material Flow Control in Construction Supply Chain with External Storage Site
Authors: Fatmah Almathkour
Abstract:
Managing and controlling the construction supply chain (CSC) are very important components of effective construction project execution. The goals of managing the CSC are to reduce uncertainty and optimize the performance of a construction project by improving efficiency and reducing project costs. The heart of much SC activity is addressing risk, and the CSC is no different. The delivery and consumption of construction materials are highly variable due to the complexity of construction operations, rapidly changing demand for certain components, lead time variability from suppliers, transportation time variability, and disruptions at the job site. Current notions of managing and controlling the CSC involve focusing on one project at a time with a push-based material ordering system driven by the initial construction schedule, and then holding a tremendous amount of inventory. A two-stage methodology was proposed that coordinates feed-forward control, in the form of advance order placement with a supplier, with local feedback control, in the form of the ability to transship materials between projects, in order to improve efficiency and reduce costs. It focused on the single-supplier integrated production and transshipment problem with multiple products. The methodology is used as a design tool for the CSC because it includes an external storage site not associated with any one of the projects. The idea is to add this feature to a highly constrained environment to explore its effectiveness in buffering the impact of variability and maintaining the project schedule at low cost. The methodology uses deterministic optimization models whose objectives minimize the total cost of the CSC. To illustrate how this methodology can be used in practice and the types of information that can be gleaned, it is tested on a number of cases based on the real example of multiple construction projects in Kuwait.
Keywords: construction supply chain, inventory control supply chain, transshipment
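The local feedback control described above, choosing between a new supplier order and a transshipment from another project, can be illustrated with a deliberately small deterministic instance. All quantities and unit costs below are hypothetical, and plain enumeration stands in for the paper's optimization models:

```python
# Hypothetical single-supplier, two-project instance with transshipment.
# (All numbers are illustrative, not from the paper's Kuwait case study.)
surplus = {"A": 3}       # project A over-ordered by 3 units
shortage = {"B": 3}      # project B is short 3 units
cost_supplier = 4        # per-unit cost of a rush order from the supplier
cost_transship = 1       # per-unit cost of moving material from A to B

def best_plan(short, surp, c_sup, c_tr):
    """Enumerate how many units of the shortage to cover by transshipment."""
    best = None
    for t in range(min(short, surp) + 1):
        cost = t * c_tr + (short - t) * c_sup  # transship t, rush-order the rest
        if best is None or cost < best[1]:
            best = (t, cost)
    return best  # (units transshipped, total cost)

t, cost = best_plan(shortage["B"], surplus["A"], cost_supplier, cost_transship)
```

Here the cheapest plan moves all three units from project A's surplus, since the transshipment cost is below the supplier's rush-order cost; a real instance would add capacity, timing, and storage-site constraints.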
Procedia PDF Downloads 122
555 Guidelines for the Sustainable Development of Agriphotovoltaics in Orchard Cultivation: An Approach for Their Harmonious Application in the Natural, Landscape and Socio-Cultural Context of South Tyrol
Authors: Fabrizio Albion
Abstract:
In response to the growing recognition of the need to combat climate change, renewable energy sources (RES), particularly solar energy, have witnessed exponential growth. The intricate nature of agriphotovoltaics, which combines agriculture and solar energy production, demands rapid legislative and technological development and poses varied, multifaceted design challenges. This complexity also characterizes its application to orchard cultivation (APVO), which, in the first part of this research, was studied in its environmental, economic, and sociocultural aspects. Insights from the literature, case studies, and consultations with experts contributed valuable perspectives, forming a robust foundation for understanding and integrating APVO into rural environments, including those in the South Tyrolean context. For its harmonious integration into the sensitive Alpine landscape, the second part was then dedicated to the development of guidelines, from the identification of the requirements a system must meet to be defined as APVO to the design flexibility it needs to be integrated into its context. As a basis for further considerations, the drafting of these guidelines was preceded by a program of interviews conducted to investigate the social perceptions of farmers, citizens, and tourists regarding the potential integration of APVO in the fruit-growing valleys of the province. Conclusive results from the data collected in the first phase are, however, still pending. Due to ongoing experiments and data collection, the current results, although generally positive, cannot definitively exclude potential negative impacts on the crop. The guidelines developed should, therefore, be understood as an initial exploration, providing a basis for future updates, also in synergy with the evolution of existing local projects.
Keywords: agriphotovoltaics, Alpine agricultural landscapes, landscape impact assessment, renewable energy
Procedia PDF Downloads 17
554 Interlingual Melodious Constructions: Romanian Translation of References to Songs in James Joyce’s Ulysses
Authors: Andra-Iulia Ursa
Abstract:
James Joyce employs several unconventional stylistic features in this landmark novel, which was meant to experiment with language. The episode known as “Sirens” is entirely conceived around music and linguistic structures subordinated to sound. However, the aspiration to the condition of music is reflected throughout the entire work, as musical effects are echoed systematically. The numerous melodies scattered across the narrative play an important role in enhancing the thoughts and feelings that pass through the minds of the characters. Often the lyrics are distorted or interwoven with other words, preoccupations, or memories, intensifying the stylistic effect. The Victorian song “Love’s old sweet song” is one of the most frequently referenced and meaningful musical allusions in Ulysses, becoming a leitmotif of infidelity. The lyrics of the song “M’appari”, from the opera “Martha”, are compared to an event from Molly and Bloom’s romantic history. Moreover, repeated phrases using words from “The bloom is on the rye” or “The croppy boy” serve as glances into the minds of the characters. Therefore, the central purpose of this study is to shed light on the way musical allusions flit through the episodes from the point of view of the stream of consciousness technique, and to compare and analyse how these constructions are rendered into Romanian. Mircea Ivănescu, the only Romanian translator to have carried out the translation of the entire ‘stylistic odyssey’, received both praise and criticism from the critics. This paper is not meant to point out possible flaws of the Romanian translation, but rather to convey the complexity of the task.
Following an attentive examination and analysis of the two texts, with respect to the form and meaning of the references to various songs, the study concludes by pointing out the intricacies of the translation process.
Keywords: Joyce, melodious constructions, stream of consciousness, style, translation
Procedia PDF Downloads 164
553 Discrete-Event Modeling and Simulation Methodologies: Past, Present and Future
Authors: Gabriel Wainer
Abstract:
Modeling and simulation (M&S) methods have long been used to analyze the behavior of complex physical systems, and it is now common to use simulation as a part of the scientific and technological discovery process. M&S advanced thanks to improvements in computer technology, which, in many cases, resulted in the development of simulation software using ad hoc techniques. Formal M&S methods appeared in an effort to improve the development of very complex simulation systems. Some of these techniques proved successful in providing a sound base for the development of discrete-event simulation models, improving the ease of model definition and enhancing application development, reducing costs, and favoring reuse. The DEVS formalism is one such technique, which proved successful in providing means for modeling while reducing development complexity and costs. DEVS model development is based on a sound theoretical framework. The independence of M&S tasks made it possible to run DEVS models on different environments (personal computers, parallel computers, real-time equipment, and distributed simulators) and middleware. We will present a historical perspective of discrete-event M&S methodologies, showing different modeling techniques. We will introduce the origins and general ideas of DEVS and compare it with some of these techniques. We will then show the current status of DEVS M&S, and we will discuss a technological perspective for solving current M&S problems (including real-time simulation, interoperability, and model-centered development techniques). We will show some examples of the current use of DEVS, including applications in different fields.
Finally, we will present current open topics in the area, which include advanced methods for centralized, parallel, or distributed simulation, the need for real-time modeling techniques, and our views on these fields.
Keywords: modeling and simulation, discrete-event simulation, hybrid systems modeling, parallel and distributed simulation
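For readers unfamiliar with the formalism's structure, a DEVS atomic model can be sketched by hand. The toy model below (a single-job processor, with hypothetical names and service time, tied to no specific DEVS tool) shows the four characteristic functions: internal transition, external transition, output, and time advance:

```python
INFINITY = float("inf")

class Processor:
    """Minimal DEVS atomic model: a server that processes one job at a time.
    Its behavior is fully described by (delta_int, delta_ext, lambda, ta)."""

    def __init__(self, service_time=2.0):
        self.service_time = service_time
        self.phase, self.sigma = "idle", INFINITY  # sigma: time to next internal event

    def ta(self):
        """Time-advance function: how long until the next internal event."""
        return self.sigma

    def delta_int(self):
        """Internal transition: the current job has finished."""
        self.phase, self.sigma = "idle", INFINITY

    def delta_ext(self, e, job):
        """External transition: a job arrives after elapsed time e."""
        if self.phase == "idle":
            self.phase, self.sigma = "busy", self.service_time
        # (a busy processor simply ignores new jobs in this toy model)

    def output(self):
        """Output function (lambda): emitted just before delta_int fires."""
        return "done" if self.phase == "busy" else None

# Drive the model by hand: a job arrives at t=0 and finishes at t=2.
p = Processor()
p.delta_ext(0.0, "job-1")
finish_time = p.ta()   # the simulator would schedule the internal event here
out = p.output()
p.delta_int()
```

A DEVS simulator does exactly this bookkeeping, advancing time by `ta()`, calling `output()` and `delta_int()` at internal events, and routing inputs to `delta_ext()`, which is what makes the formalism simulator-independent.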
Procedia PDF Downloads 323
552 Spectral Mixture Model Applied to Cannabis Parcel Determination
Authors: Levent Basayigit, Sinan Demir, Yusuf Ucar, Burhan Kara
Abstract:
Many research projects require accurate delineation of the different land cover types of an agricultural area. This is especially critical for the identification of specific plants such as cannabis. However, the complexity of vegetation stand structure, the abundance of vegetation species, and the smooth transition between different successional stages make vegetation classification difficult when using traditional approaches such as the maximum likelihood classifier. Most of the time, classification distinguishes only between trees, annual crops, or grain, and it has been difficult to accurately identify cannabis mixed with other plants. In this paper, a mixed distribution model approach is applied to classify pure and mixed cannabis parcels using Worldview-2 imagery in the Lakes region of Turkey. Five different land use types (including sunflower, maize, bare soil, and cannabis) were identified in the image. A constrained Gaussian mixture discriminant analysis (GMDA) was used to unmix the image. In the study, 255 reflectance ratios derived from spectral signatures of seven bands (Blue-Green-Yellow-Red-Rededge-NIR1-NIR2) were randomly split into 80% training and 20% test data. The Gaussian mixed distribution model approach proved to be an effective and convenient way to use very high spatial resolution imagery for distinguishing cannabis vegetation. Based on the overall classification accuracies, the Gaussian mixed distribution model was found to be very successful at the image classification task. The approach is sensitive enough to capture illegal cannabis planting areas in a large plain, and it can also be used for monitoring illegal cannabis planting areas through their spectral reflectance.
Keywords: Gaussian mixture discriminant analysis, spectral mixture model, Worldview-2, land parcels
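The per-class Gaussian modeling that underlies GMDA can be sketched as follows. This is a simplified stand-in, one unconstrained Gaussian per class rather than the constrained mixture used in the study, fitted to synthetic two-band "reflectance ratio" data:

```python
import numpy as np

def fit_gaussians(X, y):
    """Fit one multivariate Gaussian per class (a simplified stand-in for
    the constrained Gaussian mixture discriminant analysis in the study)."""
    models = {}
    for c in np.unique(y):
        Xc = X[y == c]
        models[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
    return models

def log_density(x, mean, cov):
    """Log of the multivariate normal density at x."""
    d = len(mean)
    diff = x - mean
    return -0.5 * (d * np.log(2 * np.pi) + np.log(np.linalg.det(cov))
                   + diff @ np.linalg.solve(cov, diff))

def classify(models, x):
    """Assign x to the class whose Gaussian gives the highest density."""
    return max(models, key=lambda c: log_density(x, *models[c]))

# Two synthetic "spectral ratio" classes (illustrative, not Worldview-2 data).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.2, 0.05, (40, 2)), rng.normal(0.6, 0.05, (40, 2))])
y = np.array([0] * 40 + [1] * 40)
models = fit_gaussians(X, y)
pred = classify(models, np.array([0.58, 0.62]))
```

A true mixture discriminant model would fit several Gaussian components per class, which is what lets it represent mixed parcels rather than only pure ones.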
Procedia PDF Downloads 197
551 Hybridization of Manually Extracted and Convolutional Features for Classification of Chest X-Ray of COVID-19
Authors: M. Bilal Ishfaq, Adnan N. Qureshi
Abstract:
COVID-19, currently the most infectious disease, was first reported in Wuhan, the capital city of Hubei province in China, and then spread rapidly throughout the whole world; on 11 March 2020, the World Health Organisation (WHO) declared it a pandemic. Since COVID-19 is highly contagious, it has affected approximately 219M people worldwide and caused 4.55M deaths, bringing the importance of accurate diagnosis of respiratory diseases such as pneumonia and COVID-19 to the forefront. In this paper, we propose a hybrid approach for the automated detection of COVID-19 using medical imaging, presenting the hybridization of manually extracted and convolutional features. Our approach combines Haralick texture features and convolutional features extracted from chest X-rays and CT scans. We also employ a minimum redundancy maximum relevance (MRMR) feature selection algorithm to reduce computational complexity and enhance classification performance. The proposed model is evaluated on four publicly available datasets: Chest X-ray Pneumonia, COVID-19 Pneumonia, COVID-19 CTMaster, and VinBig data. The results demonstrate high accuracy and effectiveness, with 0.9925 on the Chest X-ray Pneumonia dataset, 0.9895 on the COVID-19, Pneumonia and Normal Chest X-ray dataset, 0.9806 on the COVID CTMaster dataset, and 0.9398 on the VinBig dataset. We further evaluate the effectiveness of the proposed model using ROC curves, where the AUC for the best-performing model reaches 0.96. Our proposed model provides a promising tool for the early detection and accurate diagnosis of COVID-19, which can assist healthcare professionals in making informed treatment decisions and improving patient outcomes. The results of the proposed model are quite plausible, and the system can be deployed in a clinical or research setting to assist in the diagnosis of COVID-19.
Keywords: COVID-19, feature engineering, artificial neural networks, radiology images
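The greedy MRMR selection step mentioned above can be sketched compactly. The version below is a simplified variant that scores features by absolute Pearson correlation rather than mutual information, run on toy data rather than the radiology features:

```python
import numpy as np

def mrmr(X, y, k):
    """Greedy minimum-redundancy maximum-relevance (MRMR) selection, using
    absolute Pearson correlation as a simple proxy for mutual information."""
    n_feat = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_feat)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            # Penalize features that duplicate what is already selected.
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            if relevance[j] - redundancy > best_score:
                best, best_score = j, relevance[j] - redundancy
        selected.append(best)
    return selected

# Toy data: feature 0 drives the target, feature 1 duplicates it (redundant),
# feature 2 carries independent complementary information.
rng = np.random.default_rng(0)
f0, f2 = rng.normal(size=200), rng.normal(size=200)
X = np.column_stack([f0, f0 + rng.normal(0, 0.01, 200), f2])
y = f0 + 0.5 * f2 + rng.normal(0, 0.1, 200)
picked = mrmr(X, y, 2)
```

The point of the redundancy term is visible here: a plain relevance ranking would pick both near-identical twins, while MRMR picks one twin and then the complementary feature.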
Procedia PDF Downloads 75
550 New Biobased (Furanic-Sulfonated) Poly(esteramide)s
Authors: Souhir Abid
Abstract:
The growing interest in vegetal biomass as an alternative to fossil resources has stimulated the development of numerous classes of monomers. Polymers from renewable resources have attracted increasing attention over the last two decades, predominantly for two major reasons: (i) environmental concerns, and (ii) the use of monomers from renewable feedstocks, a steadily growing field of interest, reduces the amount of petroleum consumed in the chemical industry and opens new high-value-added markets to agriculture. Furanic polymers have been considered as alternative, environmentally friendly polymers. In our earlier work, we investigated modifying furanic polyesters by incorporating amide functions along their backbone, leading to a particular class of polymers, 'poly(ester-amide)s', which combines the excellent mechanical properties of polyamides with the biodegradability of polyesters. As a continuation of our studies on this family of polymers, a series of furanic poly(ester-amide)s bearing sulfonate groups in the main chain were synthesized from 5,5’-isopropylidene-bis(ethyl 2-furoate), dimethyl 5-sodiosulfoisophthalate, ethylene glycol, and hexamethylene diamine by melt polycondensation using zinc acetate as a catalyst. In view of the complexity of the NMR spectral analysis of the resulting sulfonated poly(ester-amide)s, we found it useful to first prepare the corresponding homopolymers, sulfonated polyesters and polyamides; structural data of these polymers serve as a basic element in the 1H NMR characterization. The hydrolytic degradation in acidic aqueous conditions (pH = 4.35) at 37 °C over a period of four weeks was followed, and the hydrolysis mechanism of the poly(ester-amide)s was elucidated in relation to the microstructure.
The strong intermolecular hydrogen bonding interactions between amide functions and water molecules increase the hydrophilicity of the macromolecular chains and consequently their hydrolytic degradation.
Keywords: furan, hydrolytic degradation, polycondensation, poly(ester amide)
Procedia PDF Downloads 294
549 Flow Field Analysis of Different Intake Bump (Compression Surface) Configurations on a Supersonic Aircraft
Authors: Mudassir Ghafoor, Irsalan Arif, Shuaib Salamat
Abstract:
This paper presents the modeling and analysis of different intake bump (compression surface) configurations and a comparison with an existing supersonic aircraft having a bump intake configuration. Many successful aircraft models have shown that a diverterless supersonic inlet (DSI), as compared to a conventional intake, can reduce weight, complexity, and maintenance cost. The research is divided into two parts. In the first part, four different intake bumps are modeled for comparative analysis, keeping the outer perimeter dimensions of the fighter aircraft consistent, and various characteristics such as flow behavior, boundary layer diversion, and pressure recovery are analyzed. In the second part, the modeled bumps are integrated with the intake duct for performance analysis, and a comparison with existing supersonic aircraft data is carried out. The bumps are named uniform large (Config 1), uniform small (Config 2), uniform sharp (Config 3), and non-uniform (Config 4) based on their geometric features. Analysis is carried out at different Mach numbers to study flow behavior in the subsonic and supersonic regimes. Flow behavior, boundary layer diversion, and pressure recovery are examined for each bump configuration, and a comparative study is carried out. The analysis reveals that at subsonic speeds, Config 1 and Config 2 give pressure recoveries similar to the diverterless supersonic intake, but the difference in pressure recoveries becomes significant at supersonic speeds. It was concluded that Config 1 gives better results than Config 3, and that a higher bump amplitude (Config 1) is preferred over lower ones (Configs 2 and 4). It was also observed that the maximum height of the bump is best placed near the cowl lip of the intake duct.
Keywords: bump intake, boundary layer, computational fluid dynamics, diverter-less supersonic inlet
Procedia PDF Downloads 243
548 Low-Impact Development Strategies Assessment for Urban Design
Abstract:
Climate change and the land-use change caused by urban expansion increase the frequency of urban flooding. To mitigate the increase in runoff volume, low-impact development (LID) is a green approach for reducing the area of impervious surface and managing stormwater at the source with decentralized micro-scale control measures. However, the current benefit assessment and practical application of LID in Taiwan still tend to address development plans at the community and building-site scales. As for urban design, site-based moisture-holding capacity has been the common index for evaluating the effectiveness of LID, which ignores the diversity and complexity of urban built environments, such as different densities, positive and negative spaces, and building volumes. Such inflexible regulations are not only difficult for most developed areas to implement, but they are also unsuitable for the different types of built environments, bringing little benefit to some of them. Looking to strengthen the link between LID and urban design in order to reduce runoff and cope with urban flooding, this research considers the characteristics of different types of built environments when developing LID strategies. The built environments are classified by cluster analysis based on density measures such as Ground Space Index (GSI), Floor Space Index (FSI), number of floors (L), and Open Space Ratio (OSR), and their impervious surface rates and runoff volumes are analyzed. Flood situations are simulated using a quasi-two-dimensional flood plain flow model, and the flood mitigation effectiveness of different types of built environments under different low-impact development strategies is evaluated. The information from the results of the assessment can be implemented more precisely in urban design.
In addition, it helps in enacting low-impact development regulations in urban design that are suitable for each different type of built environment.
Keywords: low-impact development, urban design, flooding, density measures
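The density measures used for the clustering (GSI, FSI, L, OSR) are related by standard Spacematrix-style definitions, sketched below under the simplifying assumption of a uniform floor count per site; the input areas are illustrative:

```python
def density_measures(site_area, footprint_area, floors):
    """Spacematrix-style density indices used to classify built environments.
    Assumes a single uniform floor count for the whole site (a simplification).
    GSI: coverage; FSI: built intensity; OSR: open space per built floor area;
    L: mean number of floors."""
    gsi = footprint_area / site_area               # Ground Space Index
    fsi = footprint_area * floors / site_area      # Floor Space Index
    osr = (1 - gsi) / fsi                          # Open Space Ratio
    return {"GSI": gsi, "FSI": fsi, "OSR": osr, "L": fsi / gsi}

# Example: a 1 ha site, 40% covered by 5-storey buildings.
m = density_measures(site_area=10_000, footprint_area=4_000, floors=5)
```

Note that the four indices are not independent (L = FSI / GSI and OSR = (1 - GSI) / FSI), which is why they jointly pin down a built-environment type in the clustering.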
Procedia PDF Downloads 334
547 Exploring the Link between Intangible Capital and Urban Economic Development: The Case of Three UK Core Cities
Authors: Melissa Dickinson
Abstract:
In the context of intense global competitiveness and urban transformations, today’s cities are faced with enormous challenges. There is increasing pressure on cities and regions to respond promptly and efficiently to fierce market progressions, to offer a competitive advantage and higher flexibility, and to be proactive in creating future markets. Consequently, competition among cities and regions within the dynamics of a worldwide spatial economic system is growing fiercer, amplifying the importance of intangible capital in shaping the competitive and dynamic economic performance of organisations and firms. Accordingly, this study addresses how intangible capital influences urban economic development within an urban environment. Despite substantial research on the economic and strategic determinants of urban economic development, this multidimensional phenomenon remains one of the greatest challenges for economic geographers. The research provides a unique contribution, exploring intangible capital through the lenses of entrepreneurial capital and social-network capital, drawing on business surveys and in-depth interviews with key stakeholders in the case of the three UK Core Cities: Birmingham, Bristol, and Cardiff. This paper critically considers how entrepreneurial capital and social-network capital are crucial sources of competitiveness and urban economic development. It deals with questions concerning the complexity of operationalizing ‘network capital’ in different urban settings and the challenges that reside in characterising its effects. The paper will highlight the role of institutions in facilitating urban economic development. Particular emphasis will be placed on exploring the roles formal and informal institutions have in delivering, supporting, and nurturing entrepreneurial capital and social-network capital to facilitate urban economic development.
Discussions will then consider how institutions moderate and contribute to the economic development of urban areas, providing implications for future policy formulation in the context of large and medium-sized cities.
Keywords: urban economic development, network capital, entrepreneurialism, institutions
Procedia PDF Downloads 276
546 Topology Enhancement of a Straight Fin Using a Porous Media Computational Fluid Dynamics Simulation Approach
Authors: S. Wakim, M. Nemer, B. Zeghondy, B. Ghannam, C. Bouallou
Abstract:
Designing the optimal heat exchanger is still an essential objective. Parametric optimization involves evaluating the heat exchanger dimensions to find those that best satisfy certain objectives; this method contributes to an enhanced design rather than an optimized one. Topology optimization, on the contrary, finds the optimal structure that satisfies the design objectives. The huge development in metal additive manufacturing has allowed topology optimization to find its way into engineering applications, especially in the aerospace field for optimizing metal structures. Using topology optimization in 3D heat and mass transfer problems requires huge computational time, which coupling it with CFD simulations can reduce. However, existing CFD models cannot be coupled with topology optimization: the CFD model must allow a uniform mesh to be created despite the complexity of the initial geometry, and must allow cells to be swapped from fluid to solid and vice versa. In this paper, a porous media approach compatible with the criteria of topology optimization is developed. It consists of modeling the fluid region of the heat exchanger as a porous medium of high porosity and, similarly, the solid region as a porous medium of low porosity. The switching from fluid to solid cells required by topology optimization is done simply by changing each cell's porosity using a user-defined function. This model is tested on a plate-and-fin heat exchanger and validated by comparing its results to experimental data and simulation results. Furthermore, the model is used to perform a material reallocation based on local criteria to optimize a plate-and-fin heat exchanger under a constant heat duty constraint. The optimized fin uses 20% less material than the original, while the pressure drop is reduced by about 13%.
Keywords: computational methods, finite element method, heat exchanger, porous media, topology optimization
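The porosity-switching idea can be sketched outside any CFD solver: represent the domain as a porosity field and let the "user-defined function" flip selected cells between the fluid and solid porosity levels. The porosity values and grid below are illustrative, not the paper's settings:

```python
import numpy as np

# Fluid and solid regions are both "porous media"; only the porosity differs.
# These two levels are illustrative placeholders.
FLUID, SOLID = 0.99, 0.01

def init_domain(nx, ny):
    """Uniformly meshed domain, initially all fluid."""
    return np.full((nx, ny), FLUID)

def toggle_cells(porosity, mask):
    """The 'user-defined function' step: flip the cells selected by the
    optimizer between fluid and solid, without any remeshing."""
    porosity = porosity.copy()
    porosity[mask] = np.where(porosity[mask] > 0.5, SOLID, FLUID)
    return porosity

domain = init_domain(8, 8)
fin = np.zeros((8, 8), dtype=bool)
fin[3:5, :] = True                       # place a straight "fin" of solid cells
domain = toggle_cells(domain, fin)
solid_fraction = (domain < 0.5).mean()   # material usage of the current layout
```

The key property, and the reason this is compatible with topology optimization, is that the mesh never changes: a design update is just a rewrite of porosity values over a fixed uniform grid.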
Procedia PDF Downloads 154
545 Organic Facies Classification, Distribution, and Their Geochemical Characteristics in Sirt Basin, Libya
Authors: Khaled Albriki, Feiyu Wang
Abstract:
The failed-rift epicratonic Sirt basin is located on the northern margin of the African Plate and covers an area of approximately 600,000 km2. The classification and characterization of organofacies, and their vertical and horizontal distribution, are carried out in 7 main troughs using 32 representative selected wells. 7 geological and geochemical cross sections, including Rock-Eval and % TOC data, are considered in order to analyze and characterize the main organofacies with respect to their geochemical and geological controls, and to remove the ambiguity surrounding the complexity of the organofacies types and their distribution in the basin troughs from which the oil and gas were generated and migrated. This study confirms that four different classical types of organofacies are distributed in the Sirt basin: F, D/E, C, and B. These four classical organofacies types control the type and amount of the hydrocarbons discovered in the Sirt basin. Oil bulk property data from more than 20 oil and gas fields indicate that D/E organofacies are significant oil and gas contributors, similar to B organofacies. In the western Sirt basin, in the Zallah-Dur Al Abd, Hagfa, Kotla, and Dur Atallha troughs, F organofacies are identified in the Etel, Kalash, and Hagfa formations, having % TOC < 0.6, whereas the good-quality D/E and B organofacies present in the Rachmat and Sirte shale formations both have % TOC > 1.1. Results from the deepest trough (Ajdabiya) show that the Etel (gas-prone in the Whadyat trough), Kalash, and Hagfa formations mainly constitute F organofacies. The Rachmat and Sirte shale both have D/E to B organofacies with % TOC > 1.2, indicating the best organofacies quality in the Ajdabiya trough. In the Maragh trough, results show that the Etel F organofacies and the D/E, C to B organofacies related to the Middle Nubian, Rachmat, and Sirte shale have % TOC > 0.66.
In the eastern Sirt basin troughs (Hameimat, Faregh, and Sarir), results show that the Middle Nubian, Etel, Rachmat, and Sirte shales are strongly dominated by D/E, C to B organofacies (% TOC > 0.75).
Keywords: Etel, Mid-Nubian, organic facies, Rachmat, Sirt basin, Sirte shale
Procedia PDF Downloads 128
544 Design and Implementation of 3kVA Grid-Tied Transformerless Power Inverter for Solar Photovoltaic Application
Authors: Daniel O. Johnson, Abiodun A. Ogunseye, Aaron Aransiola, Majors Samuel
Abstract:
The power inverter is a very important device in renewable energy use, particularly for solar photovoltaic power applications, because it is the interface between the DC power generator and the load or the grid. The transformerless inverter is increasingly preferred to the power converter with a galvanic isolation transformer and may eventually supplant it: it offers improved DC-to-AC conversion and power delivery efficiency, and reduced system cost, weight, and complexity. This work presents a thorough analysis of the design and prototyping of a 3 kVA grid-tied transformerless inverter. The inverter employs electronic switching with minimised heat generation in the system and operates on the principle of pulse-width modulation (PWM). The design is such that it can take two inputs, one from PV arrays and the other from battery energy storage (BES), and it addresses the safety challenge of leakage current. The inverter system was designed around a microcontroller and modeled with Proteus® software to simulate and test the viability of the designed inverter circuit. The firmware governing the operation of the grid-tied inverter is written in C and was developed using the mikroC compiler by MikroElektronika®, including the sine wave signal code for synchronization to the grid. The simulation results show that the designed inverter circuit performs excellently, with very high efficiency, a good-quality sinusoidal output waveform, negligible harmonics, and very stable performance under input voltage variation from 36 VDC to 60 VDC. The prototype confirmed the simulated results and was successfully synchronized with the utility supply. Comprehensive analyses of the circuit design and the prototype, together with an explanation of overall performance, are presented.
Keywords: grid-tied inverter, leakage current, photovoltaic system, power electronic, transformerless inverter
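A common building block of such sinusoidal PWM firmware is a lookup table of timer compare values that traces half a sine period; the sketch below generates one in Python for illustration. The table length and timer resolution are assumed values, not the ones used in this design:

```python
import math

def spwm_table(n=32, top=1000):
    """Sine lookup table for sinusoidal PWM: duty-cycle compare values over one
    half-cycle of the output, of the kind typically stored in a microcontroller's
    flash. 'n' (samples per half-cycle) and 'top' (timer period register) are
    illustrative, not this paper's firmware values."""
    return [round(top * math.sin(math.pi * i / (n - 1))) for i in range(n)]

table = spwm_table()
```

At run time the firmware would step through this table at a rate set by the grid frequency reference (flipping the bridge polarity every half-cycle), which is also where the synchronization to the utility voltage is enforced.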
Procedia PDF Downloads 291
543 Seal Capacity Evaluation by Using Mercury Injection Capillary Pressure Method Integrated with Petrographic Data: A Case Study in Green Dragon Oilfield Offshore Vietnam
Authors: Quoc Ngoc Phan, Hieu Van Nguyen, Minh Hong Nguyen
Abstract:
This study presents an integrated approach using Mercury Injection Capillary Pressure (MICP) and petrographic analysis to assess the seal quality of the inter-bedded shale formations that are considered the intra-formation top seals of the hydrocarbon-bearing zones in the Green Dragon structure. Based on the hydrocarbon column height (HCH) at the leak point derived from capillary pressure data, four seal types were identified. Furthermore, the results of scanning electron microscopy (SEM) and X-ray diffraction (XRD) analyses were interpreted to clarify the influence of clay minerals on seal capacity. The study indicated that the inter-bedded shale formations have good sealing quality, with a majority of the analyzed samples ranked as type A and B seals. Both seal types occurred mainly in mudstones with estimated pore radii of less than 0.251 µm. Overall, type A and B seals contained a large amount of authigenic clay minerals such as illite and chlorite, which showed a complex morphological arrangement in the pore space. Conversely, the less common seal types C and D were present in moderately compacted sandstones with more open pore radii. It is noticeable that there was a reduction of illite and chlorite in the clay mineral fraction of these seal types. It is expected that the integrated analysis approach using Mercury Injection Capillary Pressure and petrographic data employed in this study can be applied to assess the sealing quality of future well sites in Green Dragon or other structures.
Keywords: seal capacity, hydrocarbon column height, seal type, SEM, XRD
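The HCH calculation that underpins the seal typing can be sketched from standard capillary-pressure relations: scale the mercury-injection breakthrough pressure to the reservoir fluid pair, then divide by the buoyancy pressure gradient. All fluid properties below are common textbook values, not measurements from the Green Dragon wells:

```python
import math

def hch_from_micp(pc_hg_psi,
                  ift_hg=485.0, theta_hg=140.0,   # Hg/air system (dyn/cm, degrees)
                  ift_hc=30.0, theta_hc=0.0,      # brine/oil system (assumed values)
                  rho_w=1090.0, rho_hc=800.0):    # brine and oil densities (kg/m3)
    """Hydrocarbon column height (m) supportable at the seal's leak point,
    from a mercury-injection breakthrough pressure in psi."""
    # Scale the lab capillary pressure to the reservoir fluid pair:
    # Pc_res = Pc_hg * (ift*cos(theta))_res / (ift*cos(theta))_hg
    scale = (ift_hc * math.cos(math.radians(theta_hc))) / \
            (ift_hg * abs(math.cos(math.radians(theta_hg))))
    pc_res_pa = pc_hg_psi * 6894.76 * scale       # psi -> Pa, then scale
    # Buoyancy balance: Pc_res = (rho_w - rho_hc) * g * HCH
    return pc_res_pa / ((rho_w - rho_hc) * 9.81)

h = hch_from_micp(1000.0)  # e.g. a 1000 psi Hg breakthrough pressure
```

With these assumed fluid properties a 1000 psi mercury breakthrough pressure supports a column on the order of 200 m; the seal-type cutoffs in a study like this one are drawn on exactly this kind of converted HCH value.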
Procedia PDF Downloads 160