Search results for: intelligent computational techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8877


5727 Well Inventory Data Entry: Utilization of Developed Technologies to Progress the Integrated Asset Plan

Authors: Danah Al-Selahi, Sulaiman Al-Ghunaim, Bashayer Sadiq, Fatma Al-Otaibi, Ali Ameen

Abstract:

In light of recent changes affecting the Oil & Gas Industry, optimization measures have become imperative for all companies globally, including Kuwait Oil Company (KOC). To keep abreast of the dynamic market, a detailed Integrated Asset Plan (IAP) was developed to drive optimization across the organization, facilitated through the in-house developed software “Well Inventory Data Entry” (WIDE). This comprehensive and integrated approach enabled centralization of all planned asset components for better well planning, performance enhancement, and continuous improvement through performance tracking and midterm forecasting. This was previously hard to achieve because various legacy methods were used. This paper briefly describes the methods successfully adopted to meet the company’s objective. IAPs were initially designed using computerized spreadsheets. However, as the data captured became more complex and the number of stakeholders requiring and updating this information grew, the need to automate the conventional spreadsheets became apparent. WIDE, already in use elsewhere in the company (namely, the Workover Optimization project), was utilized to meet the dynamic requirements of the IAP cycle. With the growth of extensive features to enhance the planning process, the tool evolved into a centralized data hub for all asset groups and technical support functions to analyze and infer from, leading WIDE to become the reference two-year operational plan for the entire company. To achieve WIDE’s goal of operational efficiency, asset groups continuously add their parameters in a series of predefined workflows that create a structured process which allows risk factors to be flagged and helps mitigate them. The tool dictates assigned responsibilities for all stakeholders in a manner that enables continuous updates for daily performance measures and operational use.
The reliable availability of WIDE, combined with its user-friendliness and easy accessibility, created a platform of cross-functionality amongst all asset groups and technical support groups to update the contents of their respective planning parameters. The home-grown entity was implemented across the entire company and tailored to feed into the internal processes of several stakeholders. Furthermore, the implementation of change management and root cause analysis techniques captured the dysfunctionality of previous plans, which in turn improved the already existing planning mechanisms within the IAP. The detailed elucidation of the two-year plan flagged any upcoming risks and shortfalls foreseen in the plan. All results were translated into a series of developments that propelled the tool’s capabilities beyond planning and into operations (such as asset production forecasts, setting KPIs, and estimating operational needs). This process exemplifies the ability and reach of applying advanced development techniques to seamlessly integrate the planning parameters of various assets and technical support groups. These techniques enhance the integration of planning data workflows, ultimately laying the foundation for greater accuracy and reliability. As such, benchmarks establishing a set of standard goals are created to ensure constant improvement in the efficiency of the entire planning and operational structure.

Keywords: automation, integration, value, communication

Procedia PDF Downloads 135
5726 Hierarchical Zeolites as Catalysts for Cyclohexene Epoxidation Reactions

Authors: Agnieszka Feliczak-Guzik, Paulina Szczyglewska, Izabela Nowak

Abstract:

A catalyst-assisted oxidation reaction is one of the key reactions exploited by various industries. Conducting such reactions yields essential compounds and intermediates, such as alcohols, epoxides, aldehydes, ketones, and organic acids. Researchers are devoting more and more attention to developing active and selective materials that find application in many catalytic reactions, such as cyclohexene epoxidation. This reaction yields 1,2-epoxycyclohexane and 1,2-diols as the main products. These compounds are widely used as intermediates in the perfume industry and in the synthesis of drugs and lubricants. Hence, our research aimed to use hierarchical zeolites modified with transition metal ions, e.g., Nb, V, and Ta, in the epoxidation reaction of cyclohexene using microwave heating. Hierarchical zeolites are materials with secondary porosity, mainly in the mesoporous range, compared to microporous zeolites. In the course of the research, materials based on two commercial zeolites, with Faujasite (FAU) and Zeolite Socony Mobil-5 (ZSM-5) structures, were synthesized and characterized by various techniques, such as X-ray diffraction (XRD), transmission electron microscopy (TEM), scanning electron microscopy (SEM), and low-temperature nitrogen adsorption/desorption isotherms. The materials obtained were then used in a cyclohexene epoxidation reaction, which was carried out as follows: catalyst (0.02 g), cyclohexene (0.1 cm3), acetonitrile (5 cm3), and hydrogen peroxide (0.085 cm3) were placed in a suitable glass reaction vessel with a magnetic stirrer inside a microwave reactor. Reactions were carried out at 45 °C for 6 h (samples were taken every 1 h). The reaction mixtures were filtered to separate the liquid products from the solid catalyst and then transferred to 1.5 cm3 vials for chromatographic analysis.
The characterization techniques confirmed the acquisition of additional secondary porosity while preserving the structure of the commercial zeolite (XRD and low-temperature nitrogen adsorption/desorption isotherms). The activity results for the hierarchical catalyst modified with niobium in the cyclohexene epoxidation reaction indicate that the conversion of cyclohexene, after 6 h of running the process, is about 70%. The main reaction product was cyclohexane-1,2-diol (selectivity > 80%). In addition to this product, adipic acid, cyclohexanol, cyclohex-2-en-1-one, and 1,2-epoxycyclohexane were also obtained. Furthermore, in a blank test, no cyclohexene conversion was obtained after 6 h of reaction. Acknowledgments: The work was carried out within the project “Advanced biocomposites for tomorrow’s economy BIOG-NET,” funded by the Foundation for Polish Science from the European Regional Development Fund (POIR.04.04.00-00-1792/18-00).

Keywords: epoxidation, oxidation reactions, hierarchical zeolites, synthesis

Procedia PDF Downloads 68
5725 Analyzing a Tourism System by Bifurcation Theory

Authors: Amin Behradfar

Abstract:

Tourism has a direct impact on the national revenue of all touristic countries. It creates work opportunities, industries, and several investments that serve and raise nations' performance and cultures. This paper is devoted to analyzing the dynamical behaviour of a four-dimensional non-linear tourism-based social-ecological system by using codimension-two bifurcation theory. In particular, we investigate its cusp bifurcation. Implications of our mathematical results for the tourism industry are discussed. Moreover, the profitability, compatibility, and sustainability of the tourism system are shown with the aid of the cusp bifurcation and numerical techniques.
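For readers unfamiliar with the cusp, a minimal sketch of its standard normal form (the textbook one-dimensional codimension-two unfolding, not the paper's specific four-dimensional model) is:

```latex
% One-dimensional cusp normal form with unfolding parameters \beta_1, \beta_2
\dot{x} = \beta_1 + \beta_2 x - x^3
% Fold curves occur where an equilibrium is also degenerate; eliminating x
% from \beta_1 + \beta_2 x - x^3 = 0 and \beta_2 - 3x^2 = 0 gives the cusp locus
4\beta_2^3 = 27\beta_1^2
```

Inside the parameter wedge bounded by this curve the system has three equilibria (typically two stable regimes separated by an unstable one), and crossing the wedge boundary produces hysteretic jumps; this is the standard way conclusions about regime shifts are read off a cusp analysis.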

Keywords: tourism-based social-ecological dynamical systems, cusp bifurcation, center manifold theory, profitability, compatibility, sustainability

Procedia PDF Downloads 496
5724 Foundation Settlement Determination: A Simplified Approach

Authors: Adewoyin O. Olusegun, Emmanuel O. Joshua, Marvel L. Akinyemi

Abstract:

The heterogeneous nature of the subsurface requires the use of factual field information rather than assumptions or generalized equations. Therefore, there is a need to determine the actual rate of settlement possible in the soil before structures are built on it. This information will help in determining the type of foundation design and the kind of reinforcement that will be necessary in construction. This paper presents a simplified and faster approach for determining foundation settlement in any type of soil using real field data acquired from seismic refraction techniques and cone penetration tests. This approach was also able to determine the depth of settlement of each soil stratum. The results obtained revealed the different settlement times and depths of settlement possible.

Keywords: heterogeneous, settlement, foundation, seismic, technique

Procedia PDF Downloads 433
5723 The Risk of Ground Movements After Digging Two Parallel Vertical Tunnels in Urban Areas

Authors: Djelloul Chafia, Demagh Rafik, Kareche Toufik

Abstract:

Human activities carried out without precautions accelerate the degradation of the soil structure and reduce its resistance. Operations such as tunnel construction may exert a more or less permanent influence on the ground that surrounds them; since these structures alter the soil, it is necessary to predict their impacts through suitable measures. This research is a numerical analysis that deals with the risks and effects due to the weakening of the soil after digging two parallel vertical circular tunnels in urban areas, and it suggests forecasting techniques based essentially on the organization of underground space. The simulations are performed using the finite-difference code FLAC in a two-dimensional case with an elasto-plastic behavior of the soil.

Keywords: soil, weakening, degradation, prevention, tunnel

Procedia PDF Downloads 550
5722 Survey on Big Data Stream Classification by Decision Tree

Authors: Mansoureh Ghiasabadi Farahani, Samira Kalantary, Sara Taghi-Pour, Mahboubeh Shamsi

Abstract:

Nowadays, the development of computer technology and its recent applications provide access to new types of data that have not been considered by traditional data analysts. Two particularly interesting characteristics of such data sets are their huge size and streaming nature. Incremental learning techniques have been used extensively to address the data stream classification problem. This paper presents a concise survey of the obstacles and requirement issues in classifying data streams using decision trees. The most important issue is to maintain a balance between accuracy and efficiency: the algorithm should provide good classification performance with a reasonable response time.
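One widely used incremental technique of the kind surveyed here is the Hoeffding tree (VFDT), which keeps the accuracy/efficiency balance by splitting a node only once enough stream examples have been seen. A minimal Python sketch of the standard Hoeffding bound and split rule (the function names are ours; no particular streaming library is assumed):

```python
import math

def hoeffding_bound(value_range: float, delta: float, n: int) -> float:
    """With probability 1 - delta, the true mean of a random variable with
    range `value_range` lies within this epsilon of the mean observed over
    n independent samples."""
    return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

def should_split(gain_best: float, gain_second: float, n: int,
                 delta: float = 1e-7, value_range: float = 1.0) -> bool:
    """Split a leaf when the observed information-gain advantage of the best
    attribute over the second best exceeds the Hoeffding bound, i.e. when the
    choice is unlikely to be an artifact of having seen too few examples."""
    return (gain_best - gain_second) > hoeffding_bound(value_range, delta, n)
```

Because the bound shrinks as n grows, the tree commits to splits lazily: early in the stream it withholds judgment, which is exactly the accuracy-versus-time trade-off the abstract highlights.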

Keywords: big data, data streams, classification, decision tree

Procedia PDF Downloads 508
5721 A Numerical Computational Method of MRI Static Magnetic Field for an Ergonomic Facility Design Guidelines

Authors: Sherine Farrag

Abstract:

Magnetic resonance imaging (MRI) presents safety hazards associated with its general physical environment. The principal hazard of MRI is the presence of static magnetic fields. Proper architectural design of the MRI room ensures the safety of the environment and of health care staff. This research paper presents an easy approach for the numerical computation of fringe static magnetic fields. Iso-gauss lines of different MR intensities (0.3, 0.5, 1, and 1.5 Tesla) were mapped, and a polynomial function of the 7th degree was generated and tested. A MATLAB script was successfully applied for MRI SMF mapping. This method can be valid for any kind of commercial scanner because it requires only the knowledge of the MR scanner room map with iso-gauss lines. The results help to develop guidelines that assist healthcare architects in designing a safer magnetic resonance imaging suite.
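The paper's 7th-degree polynomial fit was done in MATLAB; the same idea can be sketched in Python with NumPy. The fringe-field samples below are invented for illustration (a rough inverse-cube decay with noise), not data from the paper:

```python
import numpy as np

# Hypothetical data: distance from the magnet isocentre (m) vs. fringe field
# (mT) along one axis of a scanner room map (illustrative values only).
distance = np.linspace(1.0, 8.0, 30)
field_mT = 2000.0 / distance ** 3 + np.random.default_rng(0).normal(0, 0.5, 30)

# Fit a 7th-degree polynomial to the profile, as the paper describes.
coeffs = np.polyfit(distance, field_mT, deg=7)   # 8 coefficients
fitted = np.polyval(coeffs, distance)

# Goodness of fit: coefficient of determination R^2.
ss_res = np.sum((field_mT - fitted) ** 2)
ss_tot = np.sum((field_mT - field_mT.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```

Once the coefficients are known, `np.polyval(coeffs, d)` estimates the field at any distance, which is what lets iso-gauss contours be redrawn for room-layout guidelines.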

Keywords: designing MRI suite, MRI safety, radiology occupational exposure, static magnetic fields

Procedia PDF Downloads 479
5720 Optimizing Machine Vision System Setup Accuracy by Six-Sigma DMAIC Approach

Authors: Joseph C. Chen

Abstract:

Machine vision systems provide automatic inspection that reduces manufacturing costs considerably. However, only a few principles have been established to optimize a machine vision system and help it function more accurately in industrial practice. Most existing design techniques for improving the accuracy of machine vision systems are complicated and impractical. This paper discusses implementing the Six Sigma Define, Measure, Analyze, Improve, and Control (DMAIC) approach to optimize the setup parameters of a machine vision system when it is used as a direct measurement technique. This research follows a case study showing how the Six Sigma DMAIC methodology has been put into use.

Keywords: DMAIC, machine vision system, process capability, Taguchi Parameter Design

Procedia PDF Downloads 420
5719 Theoretical Study on the Visible-Light-Induced Radical Coupling Reactions Mediated by Charge Transfer Complex

Authors: Lishuang Ma

Abstract:

The charge transfer (CT) complex, also known as the electron donor-acceptor (EDA) complex, has received increasing attention in the synthetic chemistry community because it can absorb visible light through intermolecular charge-transfer excited states, enabling a variety of catalyst-free photochemical transformations under mild visible-light conditions. However, a number of fundamental questions remain open, such as the origin of the visible light absorption, the photochemical and photophysical properties of the CT complex, and the detailed mechanism of the radical coupling pathways mediated by the CT complex. These are critical factors for the target-specific design and synthesis of new types of CT complexes. To this end, theoretical investigations were performed in our group to answer these questions based on multiconfigurational perturbation theory. The photo-induced fluoroalkylation reactions mediated by CT complexes, which are formed by the association of an acceptor perfluoroalkyl halide RF−X (X = Br, I) and a suitable donor molecule such as the β-naphtholate anion, were chosen as a paradigm example in this work. First, spectrum simulations were carried out by both CASPT2//CASSCF/PCM and TD-DFT/PCM methods. The computational results showed that the broad spectra in the visible light range (360-550 nm) of the CT complexes originate from the 1(σπ*) excitation, accompanied by an intermolecular electron transfer, which was also found to be closely related to the aggregate states of the donor and acceptor. Moreover, charge translocation analysis showed that a CT complex with larger charge transfer in the ground state exhibits smaller charge transfer in the 1(σπ*) excited state, causing a relative blue shift. Then, the excited-state potential energy surface (PES) was calculated at the CASPT2//CASSCF(12,10)/PCM level of theory to explore the photophysical properties of the CT complexes.
The photo-induced C-X (X = I, Br) bond cleavage was found to occur in the triplet state, which is accessible through a fast intersystem crossing (ISC) process controlled by the strong spin-orbit coupling resulting from the heavy iodine and bromine atoms. Importantly, this rapid fragmentation process can compete with and suppress the backward electron transfer (BET) event, facilitating subsequent effective photochemical transformations. Finally, the reaction pathways of the radical coupling were also inspected, showing that the radical chain propagation pathway proceeds easily with a small energy barrier of no more than 3.0 kcal/mol, which is the key factor promoting the efficiency of the photochemical reactions induced by CT complexes. In conclusion, theoretical investigations were performed to explore the photophysical and photochemical properties of the CT complexes, as well as the mechanism of the radical coupling reactions mediated by the CT complex. The computational results and findings in this work can provide critical insights into the mechanism-based design of new types of EDA complexes.

Keywords: charge transfer complex, electron transfer, multiconfigurational perturbation theory, radical coupling

Procedia PDF Downloads 135
5718 Characterization of Particle Charge from Aerosol Generation Process: Impact on Infrared Signatures and Material Reactivity

Authors: Erin M. Durke, Monica L. McEntee, Meilu He, Suresh Dhaniyala

Abstract:

Aerosols are one of the most important and significant surfaces in the atmosphere. They can influence weather, the absorption and reflection of light, and the reactivity of atmospheric constituents. A notable feature of aerosol particles is the presence of a surface charge, a characteristic imparted via the aerosolization process. The existence of charge can complicate the interrogation of aerosol particles, so many researchers remove or neutralize the charge on aerosol particles before characterization. However, the charge is present in real-world samples and likely affects the physical and chemical properties of an aerosolized material. In our studies, we aerosolized different materials in an attempt to characterize the charge imparted via the aerosolization process and determine what impact it has on the aerosolized materials’ properties. The metal oxides TiO₂ and SiO₂ were aerosolized expulsively and then characterized, using several different techniques, in an effort to determine the surface charge imparted upon the particles via the aerosolization process. Particle charge distribution measurements were conducted with a custom scanning mobility particle sizer. The results of the charge distribution measurements indicated that expulsive generation of 0.2 µm SiO₂ particles produced aerosols with upwards of 30 charges on the surface of a particle. Determination of the degree of surface charging led to the use of non-traditional techniques to explore the impact of additional surface charge on the overall reactivity of the metal oxides, specifically TiO₂. TiO₂ was aerosolized, again expulsively, onto a gold-coated tungsten mesh, which was then evaluated with transmission infrared spectroscopy in an ultra-high-vacuum environment. The TiO₂ aerosols were exposed to O₂, H₂, and CO, respectively.
Exposure to O₂ resulted in a decrease in the overall baseline of the aerosol spectrum, suggesting O₂ removed some of the surface charge imparted during aerosolization. Upon exposure to H₂, there was no observable rise in the baseline of the IR spectrum, as is typically seen for TiO₂, due to the population of electrons into the shallow trapped states and subsequent promotion of the electrons into the conduction band. This result suggests that the additional charge imparted via aerosolization fills the trapped states, therefore no rise is seen upon exposure to H₂. Dosing the TiO₂ aerosols with CO showed no adsorption of CO on the surface, even at lower temperatures (~100 K), indicating the additional charge on the aerosol surface prevents the CO molecules from adsorbing to the TiO₂ surface. The results observed during exposure suggest that the additional charge imparted via aerosolization impacts the interaction with each probe gas.

Keywords: aerosols, charge, reactivity, infrared

Procedia PDF Downloads 116
5717 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images

Authors: Elham Bagheri, Yalda Mohsenzadeh

Abstract:

Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features that are common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, attempting to create a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data.
The reconstruction error of each image, the error reduction, and the image's distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate a strong correlation between the reconstruction error and distinctiveness of images and their memorability scores. This suggests that images with more distinctive features, which challenge the autoencoder's compressive capacities, are inherently more memorable. There is also a negative correlation between memorability and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
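The distinctiveness measure described above (Euclidean distance from each latent code to its nearest neighbor) and its correlation with memorability can be sketched as follows. The latent codes and memorability scores here are synthetic stand-ins, with memorability deliberately tied to distinctiveness to demonstrate the analysis, not the MemCat data or the VGG autoencoder's actual codes:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in data: latent codes for n images in a d-dimensional latent space.
n, d = 200, 16
latents = rng.normal(size=(n, d))

def distinctiveness(z: np.ndarray) -> np.ndarray:
    """Euclidean distance from each latent code to its nearest neighbor."""
    diff = z[:, None, :] - z[None, :, :]          # (n, n, d) pairwise diffs
    dist = np.linalg.norm(diff, axis=-1)          # (n, n) distance matrix
    np.fill_diagonal(dist, np.inf)                # exclude self-distance
    return dist.min(axis=1)

dist_score = distinctiveness(latents)

# Synthetic memorability: distinctiveness plus noise (illustrative link only).
memorability = dist_score + rng.normal(0, 0.05, n)

# Pearson correlation between distinctiveness and memorability.
r = np.corrcoef(dist_score, memorability)[0, 1]
```

With real data, `latents` would come from the fine-tuned autoencoder's bottleneck and `memorability` from the behavioral scores; the correlation step is unchanged.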

Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception

Procedia PDF Downloads 68
5716 Methods to Measure the Quality of 2D Image Compression Techniques

Authors: Mohammed H. Rasheed, Hussein Nadhem Fadhel, Mohammed M. Siddeq

Abstract:

In this paper, we suggest image quality metrics that can provide an accurate measure, close to the perceived quality, of the tested images. Such tools give metrics that can be used to compare the performance of image compression algorithms. Two new metrics to measure the quality of decompressed images are proposed. The metrics are based on combined data (CD) between the original and decompressed images. Compared with existing metrics, e.g., PSNR and RMSE, the proposed metrics give values that most closely reflect image quality as perceived by the human eye.
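The baseline metrics the proposed CD measures are compared against, RMSE and PSNR, can be computed as below (a standard sketch for 8-bit images; the paper's own CD metrics are not reproduced here):

```python
import numpy as np

def rmse(original: np.ndarray, decompressed: np.ndarray) -> float:
    """Root-mean-square error between two images of equal shape."""
    diff = original.astype(float) - decompressed.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(original: np.ndarray, decompressed: np.ndarray,
         max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio in decibels (higher = closer images)."""
    err = rmse(original, decompressed)
    if err == 0:
        return float("inf")          # identical images
    return 20.0 * np.log10(max_value / err)
```

For example, two 8-bit images that differ by exactly 5 grey levels at every pixel give an RMSE of 5 and a PSNR of about 34.2 dB; PSNR's known weakness, which motivates perceptual metrics like the proposed CD, is that very different distortion patterns can share the same RMSE.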

Keywords: RMSE, PSNR, image quality metrics, image compression

Procedia PDF Downloads 11
5715 Pre-Shared Key Distribution Algorithms' Attacks for Body Area Networks: A Survey

Authors: Priti Kumari, Tricha Anjali

Abstract:

Body Area Networks (BANs) have emerged as the most promising technology for pervasive health care applications. Since they facilitate the communication of very sensitive health data, information leakage in such networks can put human life at risk, and hence security inside BANs is a critical issue. Safe distribution and periodic refreshment of cryptographic keys are needed to ensure the highest level of security. In this paper, we focus on the key distribution techniques and how they are categorized for BANs. The state-of-the-art pre-shared key distribution algorithms are surveyed. Possible attacks on the algorithms are demonstrated with examples.

Keywords: attacks, body area network, key distribution, key refreshment, pre-shared keys

Procedia PDF Downloads 351
5714 Identification of Suitable Sites for Rainwater Harvesting in Salt Water Intruded Area by Using Geospatial Techniques in Jafrabad, Amreli District, India

Authors: Pandurang Balwant, Ashutosh Mishra, Jyothi V., Abhay Soni, Padmakar C., Rafat Quamar, Ramesh J.

Abstract:

Seawater intrusion into coastal aquifers has become one of the major environmental concerns. Although it is a natural phenomenon, it can be induced by anthropogenic activities such as excessive exploitation of groundwater and seacoast mining. The geological and hydrogeological conditions, including groundwater heads and the groundwater pumping pattern in coastal areas, also influence the magnitude of seawater intrusion. However, this problem can be remediated by taking preventive measures such as rainwater harvesting and artificial recharge. The present study is an attempt to identify suitable sites for rainwater harvesting in the salt-intrusion-affected area near the coastal aquifer of Jafrabad town, Amreli district, Gujarat, India. The physico-chemical water quality results show that, out of 25 groundwater samples collected from the study area, most were found to contain high concentrations of Total Dissolved Solids (TDS) with major fractions of Na and Cl ions. The Cl/HCO3 ratio was also found to be greater than 1, which indicates saltwater contamination in the study area. A geophysical survey was conducted at nine sites within the study area to explore the extent of seawater contamination. From the inverted resistivity sections, low-resistivity zones (<3 Ohm m) associated with seawater contamination were demarcated in the north block pit and south block pit of the NCJW mines and in the Mitiyala, Lotpur, and Lunsapur villages at depths of 33 m, 12 m, 40 m, 37 m, and 24 m, respectively. Geospatial techniques in combination with the Analytical Hierarchy Process (AHP), considering hydrogeological factors, geographical features, drainage pattern, water quality, and geophysical results for the study area, were exploited to identify potential zones for rainwater harvesting. A rainwater harvesting suitability model was developed in ArcGIS 10.1 software, and a rainwater harvesting suitability map for the study area was generated.
AHP in combination with weighted overlay analysis is an appropriate method for identifying rainwater harvesting potential zones. The suitability map can be further utilized as a guidance map for the development of rainwater harvesting infrastructure in the study area, either for artificial groundwater recharge facilities or for the direct use of harvested rainwater.
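The AHP-plus-weighted-overlay workflow can be sketched numerically as follows. The pairwise comparison matrix, the four criteria, and the cell scores are illustrative assumptions on Saaty's 1-9 scale, not the values used in the study:

```python
import numpy as np

# Hypothetical pairwise comparison matrix for four rainwater-harvesting
# criteria (e.g. slope, drainage density, land use, water quality).
A = np.array([
    [1.0, 3.0, 5.0, 7.0],
    [1/3, 1.0, 3.0, 5.0],
    [1/5, 1/3, 1.0, 3.0],
    [1/7, 1/5, 1/3, 1.0],
])

# Approximate AHP priority weights: normalise each column by its sum,
# then average across each row (weights sum to 1).
col_norm = A / A.sum(axis=0)
weights = col_norm.mean(axis=1)

# Consistency check (random index RI = 0.90 for a 4x4 matrix).
n = A.shape[0]
lam_max = (A @ weights / weights).mean()
ci = (lam_max - n) / (n - 1)
cr = ci / 0.90            # CR < 0.1 means the judgements are acceptably consistent

# Weighted overlay for one raster cell: each criterion reclassified to a
# 1-5 suitability score, combined with the AHP weights.
cell_scores = np.array([4, 3, 5, 2])
suitability = float(weights @ cell_scores)
```

In a GIS, the last step runs per cell over the reclassified criterion rasters, producing the suitability map described in the abstract.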

Keywords: analytical hierarchy process, groundwater quality, rainwater harvesting, seawater intrusion

Procedia PDF Downloads 162
5713 Hydrodynamic Analysis with Heat Transfer in Solid Gas Fluidized Bed Reactor for Solar Thermal Applications

Authors: Sam Rasoulzadeh, Atefeh Mousavi

Abstract:

Fluidized bed reactors, owing to their temperature uniformity, are known as a safe and effective means of carrying out highly exothermic and endothermic catalytic reactions. In these reactors, a wide range of catalyst particles can be used, and continuous operation allows production to proceed in succession. Providing optimal conditions for the operation of these types of reactors will prevent the exorbitant costs of the laboratory work otherwise necessary. In this regard, a hydrodynamic analysis with heat transfer was carried out for a solid-gas fluidized bed reactor for solar thermal applications. The results showed that the fluid at the reactor inlet has a lower temperature than at the outlet; as the fluid passes through the reactor, heat transfer occurs between the cylinder, the solar panel, and the fluid. This increases the fluid temperature at the outlet pump, and the kinetic energy of the fluid is also raised in the outlet areas.

Keywords: heat transfer, solar reactor, fluidized bed reactor, CFD, computational fluid dynamics

Procedia PDF Downloads 164
5712 Nanoparticles Modification by Grafting Strategies for the Development of Hybrid Nanocomposites

Authors: Irati Barandiaran, Xabier Velasco-Iza, Galder Kortaberria

Abstract:

Hybrid inorganic/organic nanostructured materials based on block copolymers are of considerable interest in the field of nanotechnology, given that these nanocomposites combine the properties of the polymer matrix with the unique properties of the added nanoparticles. The use of block copolymers as templates offers the opportunity to control the size and distribution of inorganic nanoparticles. This research is focused on the surface modification of inorganic nanoparticles to achieve a good interface between nanoparticles and polymer matrices, which hinders nanoparticle aggregation. The aim of this work is to obtain a good and selective dispersion of Fe3O4 magnetic nanoparticles in different types of block copolymers, such as poly(styrene-b-methyl methacrylate) (PS-b-PMMA), poly(styrene-b-ε-caprolactone) (PS-b-PCL), poly(isoprene-b-methyl methacrylate) (PI-b-PMMA), or poly(styrene-b-butadiene-b-methyl methacrylate) (SBM), by using different grafting strategies. Fe3O4 magnetic nanoparticles have been surface-modified with polymer or block copolymer brushes following different grafting methods (grafting to, grafting from, and grafting through) to achieve a selective location of the nanoparticles in the desired domains of the block copolymers. The morphology of the fabricated hybrid nanocomposites was studied by means of atomic force microscopy (AFM), and, with the aim of reaching well-ordered nanostructured composites, different annealing methods were used. Additionally, the nanoparticle amount was also varied in order to investigate the effect of nanoparticle content on the morphology of the block copolymer. Nowadays, different characterization methods are used to investigate the magnetic properties of nanometer-scale electronic devices. In particular, two different techniques have been used to characterize the synthesized nanocomposites.
First, magnetic force microscopy (MFM) was used to investigate the magnetic properties qualitatively, as this technique allows magnetic domains on the sample surface to be distinguished. Second, magnetic characterization was performed by vibrating sample magnetometry and a superconducting quantum interference device. This demonstrated that the magnetic properties of the nanoparticles have been transferred to the nanocomposites, which exhibit superparamagnetic behavior similar to that of the maghemite nanoparticles at room temperature. The obtained advanced nanostructured materials could find possible applications in the field of dye-sensitized solar cells and electronic nanodevices.

Keywords: atomic force microscopy, block copolymers, grafting techniques, iron oxide nanoparticles

Procedia PDF Downloads 250
5711 Thermal Stress and Computational Fluid Dynamics Analysis of Coatings for High-Temperature Corrosion

Authors: Ali Kadir, O. Anwar Beg

Abstract:

Thermal barrier coatings are among the most popular methods for providing corrosion protection in high-temperature applications, including aircraft engine systems, external spacecraft structures, rocket chambers, etc. Many different materials are available for such coatings, of which ceramics generally perform the best. Motivated by these applications, the current investigation presents detailed finite element simulations of coating stress analysis for a three-dimensional, three-layered model of a test sample representing a typical gas turbine component scenario. Structural steel is selected for the main inner layer, titanium (Ti) alloy for the middle layer, and silicon carbide (SiC) for the outermost layer. The model dimensions are 20 mm (width), 10 mm (height), and three 1 mm deep layers. ANSYS software is employed to conduct three types of analysis: static structural analysis, thermal stress analysis, and computational fluid dynamic erosion/corrosion analysis (via ANSYS FLUENT). The specified geometry, which corresponds exactly to the corrosion test samples, is discretized using a body-sizing meshing approach comprising mainly tetrahedral cells. Refinements were concentrated at the connection points between the layers to shift the focus towards the static effects dissipated between them. A detailed grid independence study was conducted to confirm the accuracy of the selected mesh densities. To recreate gas turbine scenarios in the stress analysis simulations, static loading and thermal environment conditions of up to 1000 N and 1000 K were imposed. The default solver was used to set the controls for the simulation, with one side of the model taken as the fixed support while the opposite side was subjected to a tabular force of 500 and 1000 newtons. Equivalent elastic strain, total deformation, equivalent stress, and strain energy were computed for all cases.
Each analysis was then repeated twice, each time removing one of the layers, so that the static and thermal effects of each coating could be tested separately. An ANSYS FLUENT simulation was conducted to study the effect of corrosion on the model under similar thermal conditions. The momentum and energy equations were solved, and the viscous heating option was applied to better represent the thermal physics of heat transfer between the layers of the structure. A Discrete Phase Model (DPM) in ANSYS FLUENT was employed, which allows a continuous, uniform stream of air particles to be injected onto the model, thereby enabling calculation of the corrosion factor caused by hot air injection (particles prescribed a velocity of 5 m/s at 1273.15 K). Extensive visualization of the results is provided. The simulations reveal interesting features of the coating response to realistic gas turbine loading conditions, including significantly different stress concentrations with different coatings.
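For orientation, the order of magnitude of the stresses that such a simulation resolves can be estimated with a standard first-order formula for a thin, fully constrained coating layer under a uniform temperature rise (this estimate is background context, not an equation taken from the paper):

```latex
\sigma_{\mathrm{th}} = \frac{E\,\alpha\,\Delta T}{1-\nu}
```

where $E$ is the Young's modulus of the layer, $\alpha$ its coefficient of thermal expansion, $\nu$ its Poisson's ratio, and $\Delta T$ the temperature change. Mismatches in $E$ and $\alpha$ between adjacent layers are what concentrate stress at the interlayer connections that the mesh refinement targets.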

Keywords: thermal coating, corrosion, ANSYS FEA, CFD

Procedia PDF Downloads 132
5710 Determination of Myocardial Function Using Heart Accumulated Radiopharmaceuticals

Authors: C. C .D. Kulathilake, M. Jayatilake, T. Takahashi

Abstract:

The myocardium is composed of specialized muscle that relies mainly on fatty acid and sugar metabolism, which contributes substantially to heart function. Changes in the cardiac energy-producing system during heart failure have been demonstrated using autoradiography techniques. This study focused on evaluating sugar and fatty acid metabolism in the myocardium, the heart's energy-supply system, using heart-accumulated radiopharmaceuticals. Two sets of autoradiographs of heart cross-sections of male Lewis rats were analyzed, and time-accumulation curves were obtained using MATLAB image-processing software to evaluate fatty acid and sugar metabolic function.

Keywords: autoradiographs, fatty acid, radiopharmaceuticals, sugar

Procedia PDF Downloads 443
5709 Shakespeare's Hamlet in Ballet: Transformation of an Archival Recording of a Neoclassical Ballet Performance into a Contemporary Transmodern Dance Video Applying Postmodern Concepts and Techniques

Authors: Svebor Secak

Abstract:

This four-year artistic research project, hosted by the University of New England, Australia, set out to experiment with non-conventional ways of presenting a language-based narrative in dance, drawing on insights from recent theoretical writing on performance and addressing the research question: how can an archival recording of a neoclassical ballet performance be transformed into a new artistic dance video by implementing postmodern philosophical concepts? The Creative Practice component takes the form of a dance video, Hamlet Revisited, which is a reworking of the archival recording of the neoclassical ballet Hamlet, augmented by new material and produced using the resources, technicians and dancers of the Croatian National Theatre in Zagreb. The methodology for the creation of Hamlet Revisited consisted of extensive field and desk research, after which three dancers were shown the recording of the original Hamlet and then created their artistic responses to it based on their reception and appreciation of it. The dancers responded differently, according to their diverse dancing backgrounds and life experiences. They began in the role of the audience, observing the video of the original ballet, and transformed into the role of choreographer-performer. Their newly recorded material was edited and juxtaposed with the archival recording of Hamlet and other relevant footage, allowing for postmodern features such as aleatoric content, synchronicity, eclecticism and serendipity. In this way the work establishes communication on a receptive, reader-response basis, blending the roles of choreographer, performer and spectator and creating an original work of art whose significance lies in the relationships and communication between styles, between old and new choreographic approaches, and between artists and audiences, and in the transformation of their traditional roles and relationships.
In editing and collating, the following techniques were used with the intention of avoiding a singular narrative: fragmentation, repetition, reverse motion, multiplication of images, split screen, overlaid X-rays, image scratching, slow motion, freeze-frame and simultaneity. Key postmodern concepts considered were deconstruction, diffuse authorship, supplementation, simulacrum, self-reflexivity, questioning the role of the author, intertextuality and incredulity toward grand narratives - departing from the original story and thus personalising its ontological themes. From a broad brush of diverse concepts and techniques applied in an almost prescriptive manner, the project focuses on intertextuality, which proves to be valid on at least two levels. The first is the possibility of a more objective analysis in combination with a semiotic structuralist approach, moving from strict relationships between signs to a multiplication of signifiers and considering the dance text as an open construction that contains the elusive and enigmatic quality of art and leaves the interpretive position open. The second is the creation of the new work, where the author functions as an editor, aware and conscious of the interplay of disparate texts and their sources, which co-act in the mind during the creative process. It is argued here that the eclectic combination of old and new material, through constant oscillations of different discourses upon the same topic, resulted in an integrationist, transmodern work of art that might serve as a model for reconsidering existing choreographic creations.

Keywords: Ballet Hamlet, intertextuality, transformation, transmodern dance video

Procedia PDF Downloads 246
5708 Cubical Representation of Prime and Essential Prime Implicants of Boolean Functions

Authors: Saurabh Rawat, Anushree Sah

Abstract:

K-maps are generally thought to be the simplest means of solving Boolean equations. Cubical representation of Boolean equations is an alternative route to a solution, which would otherwise be obtained with truth tables, Boolean laws, and the various techniques of Karnaugh maps. The largest possible k-cubes that exist for a given function are its prime implicants. This paper applies cubical methods to the minimization of logic functions. The main purpose is to highlight and exploit the advantages of cubical techniques in minimizing logic functions, with the aim of achieving a minimal-cost solution.
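As a concrete illustration of the central claim (a minimal Quine-McCluskey-style sketch written for this summary, not code from the paper): repeatedly merging cubes that differ in one bit grows the k-cubes, and any cube that can no longer be enlarged is a prime implicant.

```python
from itertools import combinations

def combine(a, b):
    """Merge two implicants (bit strings with '-' wildcards) that differ
    in exactly one position; return None if they cannot be merged."""
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diff) == 1:
        i = diff[0]
        return a[:i] + "-" + a[i + 1:]
    return None

def prime_implicants(minterms, nbits):
    """Grow k-cubes by pairwise merging; cubes that never merge are prime."""
    terms = {format(m, f"0{nbits}b") for m in minterms}
    primes = set()
    while terms:
        merged, used = set(), set()
        for a, b in combinations(sorted(terms), 2):
            c = combine(a, b)
            if c is not None:
                merged.add(c)
                used.update((a, b))
        primes |= terms - used        # unmerged cubes cannot be enlarged
        terms = merged
    return primes

# f(A, B, C) = Σm(0, 1, 2, 5, 6, 7): a classic function whose six largest
# cubes are all 1-cubes, i.e. six prime implicants
print(sorted(prime_implicants([0, 1, 2, 5, 6, 7], 3)))
```

Each returned string is a cube: for example `'0-0'` covers minterms 000 and 010, i.e. the product term A'C'.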

Keywords: K-maps, don’t care conditions, Boolean equations, cubes

Procedia PDF Downloads 378
5707 Use of Interpretable Evolved Search Query Classifiers for Sinhala Documents

Authors: Prasanna Haddela

Abstract:

Document analysis is a mature yet still active research field, partly as a result of the intricate nature of building computational tools, but also due to the inherent problems arising from the variety and complexity of human languages. Breaking down language barriers is vital in enabling access to a number of recent technologies. This paper investigates the application of document classification methods to new Sinhala datasets. This language is geographically isolated and rich in unique features. We examine the interpretability of the classification models, with a particular focus on the use of evolved Lucene search queries, generated using a Genetic Algorithm (GA), as a method of document classification. We compare the accuracy and interpretability of these search queries with those of other popular classifiers. The results are promising and are roughly in line with previous work on English-language datasets.
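A toy sketch of the underlying idea (corpus, query encoding, and GA operators invented here for illustration; the paper evolves actual Lucene query strings): an individual is a set of terms interpreted as an OR query, a document is classified positive when it matches the query, and fitness is classification accuracy. The resulting classifier is interpretable because it is literally a readable search query.

```python
import random

# Toy labeled corpus: 1 = sports, 0 = politics (invented for illustration)
DOCS = [
    ("match goal team score win", 1), ("election vote senate bill", 0),
    ("coach team player goal", 1),    ("policy vote law minister", 0),
    ("score win match player", 1),    ("senate law election policy", 0),
]
VOCAB = sorted({w for text, _ in DOCS for w in text.split()})

def fitness(query):
    """Accuracy of the OR-of-terms query as a binary classifier."""
    hits = sum(any(t in text.split() for t in query) == bool(label)
               for text, label in DOCS)
    return hits / len(DOCS)

def evolve(pop_size=20, generations=30, seed=0):
    rng = random.Random(seed)
    pop = [set(rng.sample(VOCAB, 2)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # truncation selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            pool = sorted(a | b)
            child = set(rng.sample(pool, min(3, len(pool))))  # crossover
            if rng.random() < 0.3:                            # mutation
                child.add(rng.choice(VOCAB))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("evolved query:", " OR ".join(sorted(best)), "accuracy:", fitness(best))
```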

Keywords: evolved search queries, Sinhala document classification, Lucene Sinhala analyzer, interpretable text classification, genetic algorithm

Procedia PDF Downloads 104
5706 An Efficient Acquisition Algorithm for Long Pseudo-Random Sequence

Authors: Wan-Hsin Hsieh, Chieh-Fu Chang, Ming-Seng Kao

Abstract:

In this paper, a novel method termed Phase Coherence Acquisition (PCA) is proposed for pseudo-random (PN) sequence acquisition. By employing complex phasors, the PCA requires only complex additions, on the order of N, the length of the sequence, whereas the conventional method utilizing the fast Fourier transform (FFT) requires both complex multiplications and additions on the order of N log2 N. To combat noise, the input and local sequences are partitioned and mapped into complex phasors in the PCA. The phase differences between pairs of input and local phasors are utilized for acquisition, and complex multiplications are thus avoided. For greater robustness to noise, a multi-layer PCA is developed that extracts the code phase step by step. The significant reduction in computational load makes the PCA an attractive method, especially when the sequence length N is extremely large, which becomes intractable for FFT-based acquisition.
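For reference, the conventional baseline the PCA is compared against can be sketched as follows (sequence length, delay, and noise level are illustrative choices, not values from the paper): circular correlation of the received signal with the local PN replica, computed via FFTs in O(N log N), with the code phase read off as the correlation peak.

```python
import numpy as np

# Illustrative setup: a random +/-1 PN-like sequence, delayed and noisy
rng = np.random.default_rng(42)
N = 1024
local = rng.choice([-1.0, 1.0], size=N)       # local PN replica
true_phase = 317
received = np.roll(local, true_phase)         # received = delayed replica
received += 0.5 * rng.standard_normal(N)      # additive noise

# Conventional FFT-based acquisition: circular cross-correlation via the
# convolution theorem, O(N log N) complex multiplications and additions
corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(local))).real
estimated_phase = int(np.argmax(corr))
print(estimated_phase)
```

The proposed PCA replaces the multiplications in this pipeline with phase comparisons between phasor-mapped partitions, which is where the claimed O(N) addition-only cost comes from.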

Keywords: FFT, PCA, PN sequence, convolution theory

Procedia PDF Downloads 471
5705 A Quantitative Evaluation of Text Feature Selection Methods

Authors: B. S. Harish, M. B. Revanasiddappa

Abstract:

Due to the rapid growth of text documents in digital form, automated text classification has become an important research area over the last two decades. The major challenges of text document representation are high dimensionality, sparsity, volume, and semantics. Since terms are the only features that can be found in documents, the selection of good terms (features) plays a very important role. In text classification, feature selection is a strategy that can be used to improve classification effectiveness, computational efficiency, and accuracy. In this paper, we present a quantitative analysis of the most widely used feature selection (FS) methods, viz. Term Frequency-Inverse Document Frequency (tf-idf), Mutual Information (MI), Information Gain (IG), Chi-Square (χ2), Term Frequency-Relevance Frequency (tfrf), Term Strength (TS), Ambiguity Measure (AM), and Symbolic Feature Selection (SFS), for classifying text documents. We evaluated all the feature selection methods on standard datasets such as 20 Newsgroups, the 4 University dataset, and Reuters-21578.
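To make one of the surveyed scores concrete (a minimal sketch with an invented four-document corpus, not data or code from the paper): the chi-square score of a term for a class is computed from the 2x2 contingency table of term presence versus class membership; high scores flag terms whose occurrence is strongly associated with the class.

```python
# Tiny labeled corpus: each document is a set of terms plus a class label
docs = [({"ball", "goal", "team"}, "sport"), ({"vote", "law"}, "politics"),
        ({"goal", "win"}, "sport"),          ({"law", "court"}, "politics")]

def chi_square(term, docs, cls):
    """Chi-square association between a term and a class over the corpus."""
    n = len(docs)
    a = sum(1 for terms, label in docs if term in terms and label == cls)
    b = sum(1 for terms, label in docs if term in terms and label != cls)
    c = sum(1 for terms, label in docs if term not in terms and label == cls)
    d = n - a - b - c
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0

# "goal" appears in every sport document and no politics document,
# so it attains the maximum score (n) for this corpus
print(chi_square("goal", docs, "sport"))
print(chi_square("win", docs, "sport"))
```

Ranking all terms by such a score and keeping the top k is the feature selection step the paper evaluates across its eight methods.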

Keywords: classifiers, feature selection, text classification

Procedia PDF Downloads 450
5704 Mixed Integer Programming-Based One-Class Classification Method for Process Monitoring

Authors: Younghoon Kim, Seoung Bum Kim

Abstract:

One-class classification plays an important role in detecting outliers and abnormalities among normal observations. In previous research, several attempts were made to extend the scope of application of one-class classification techniques to statistical process control problems. For most previous approaches, such as the support vector data description (SVDD) control chart, the design of the control limits is commonly based on the assumption that the proportion of abnormal observations is approximately equal to the expected Type I error rate in the Phase I process. Because of the limitations of one-class classification techniques based on convex optimization, the proportion of abnormal observations cannot be made exactly equal to the expected Type I error rate: controlling the Type I error rate requires optimizing constraints with integer decision variables, which convex optimization cannot accommodate. This limitation is undesirable, from both theoretical and practical perspectives, for constructing effective control charts. In this work, to address the limitation of previous approaches, we propose a one-class classification algorithm based on mixed integer programming, which can solve problems formulated with both continuous and integer decision variables. The proposed method minimizes the radius of a spherically shaped boundary subject to the constraint that the number of normal observations it encloses equals a constant value specified by users. By modifying this constant, users can exactly control the proportion of normal data described by the spherically shaped boundary. Thus, the proportion of abnormal observations can be made theoretically equal to the expected Type I error rate in the Phase I process. Moreover, analogous to SVDD, the boundary can be made to describe complex structures by using kernel functions. A new multivariate control chart applying the algorithm is proposed.
This chart uses a monitoring statistic to characterize the degree of being an abnormal point as obtained through the proposed one-class classification. The control limit of the proposed chart is established by the radius of the boundary. The usefulness of the proposed method was demonstrated through experiments with simulated and real process data from a thin film transistor-liquid crystal display.
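A much-simplified sketch of the key idea (not the paper's MIP formulation, which also optimizes the center and supports kernels): if the boundary center is held fixed, choosing the radius as the k-th order statistic of the distances makes the sphere contain exactly k of the n Phase I points, so the normal proportion k/n, and hence the Type I error rate, is controlled exactly rather than approximately.

```python
import numpy as np

# Synthetic Phase I data (illustrative, not from the paper's case study)
rng = np.random.default_rng(7)
X = rng.standard_normal((200, 2))

alpha = 0.05                                  # target Type I error rate
k = int(round((1 - alpha) * len(X)))          # points to enclose: 190

center = X.mean(axis=0)                       # fixed center (simplification)
dists = np.linalg.norm(X - center, axis=1)
radius = np.sort(dists)[k - 1]                # k-th smallest distance

inside = int(np.sum(dists <= radius))
print(inside, len(X) - inside)                # exactly k inside, n - k out
```

In the full method, an integer indicator variable per observation lets the solver pick which k points to enclose while simultaneously shrinking the (possibly kernelized) boundary, which is precisely what convex formulations such as SVDD cannot express.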

Keywords: control chart, mixed integer programming, one-class classification, support vector data description

Procedia PDF Downloads 167
5703 The Corrosion Resistance of the 32CrMoV13 Steel Nitriding

Authors: Okba Belahssen, Lazhar Torchane, Said Benramache, Abdelouahed Chala

Abstract:

This paper presents the corrosion behavior of plasma-nitrided 32CrMoV13 steel. Two kinds of samples were tested: untreated and plasma-nitrided. The structure of the layers was determined by X-ray diffraction, while the morphology was observed by scanning electron microscopy (SEM). The corrosion behavior was evaluated by electrochemical techniques (potentiodynamic curves and electrochemical impedance spectroscopy). The corrosion tests were carried out in an acid chloride solution (1 M HCl). Experimental results showed that the nitrides ε-Fe2−3N and γ′-Fe4N present in the white layer are nobler than the substrate but may promote, by galvanic effect, localized corrosion through open porosity. The best corrosion protection was observed for the nitrided sample.

Keywords: plasma-nitrided, 32CrMoV13 steel, corrosion, EIS

Procedia PDF Downloads 577
5702 A Parallel Poromechanics Finite Element Method (FEM) Model for Reservoir Analyses

Authors: Henrique C. C. Andrade, Ana Beatriz C. G. Silva, Fernando Luiz B. Ribeiro, Samir Maghous, Jose Claudio F. Telles, Eduardo M. R. Fairbairn

Abstract:

The present paper aims at developing a parallel computational model for numerical simulation of poromechanics analyses of heterogeneous reservoirs. In the context of macroscopic poroelastoplasticity, the hydromechanical coupling between the skeleton deformation and the fluid pressure is addressed by means of two constitutive equations. The first state equation relates the stress to skeleton strain and pore pressure, while the second state equation relates the Lagrangian porosity change to skeleton volume strain and pore pressure. A specific algorithm for local plastic integration using a tangent operator is devised. A modified Cam-clay type yield surface with associated plastic flow rule is adopted to account for both contractive and dilative behavior.
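In standard notation (a generic linear form given here for context; the paper's exact coefficients and nonlinear extensions may differ), the two coupled state equations described above read:

```latex
\boldsymbol{\sigma} = \mathbb{C} : \boldsymbol{\varepsilon} - b\,p\,\mathbf{1},
\qquad
\phi - \phi_0 = b\,\operatorname{tr}\boldsymbol{\varepsilon} + \frac{p}{N}
```

where $\boldsymbol{\sigma}$ is the stress, $\boldsymbol{\varepsilon}$ the skeleton strain, $p$ the pore pressure, $\phi$ the Lagrangian porosity, $\mathbb{C}$ the drained stiffness tensor, $b$ the Biot coefficient, and $N$ the Biot modulus. The first equation couples stress to strain and pore pressure; the second couples the porosity change to the skeleton volume strain and pore pressure.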

Keywords: finite element method, poromechanics, poroplasticity, reservoir analysis

Procedia PDF Downloads 379
5701 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. 
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
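The modeling step above can be sketched as follows (synthetic activity-level data and feature names invented for illustration; the study's actual dataset, features, and hyperparameters are not reproduced here): fit a Random Forest to activity features and read off feature importances to surface cost drivers such as scope changes and material delays.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic activity-level dataset (hypothetical features and generating rule)
rng = np.random.default_rng(0)
n = 500
scope_changes  = rng.integers(0, 5, n)      # number of scope changes
material_delay = rng.exponential(3.0, n)    # days of material delay
planned_cost   = rng.uniform(10, 500, n)    # planned activity cost (k$)
crew_size      = rng.integers(2, 20, n)     # irrelevant by construction

# Overrun driven mainly by scope changes and delays, plus noise
overrun = (8 * scope_changes + 2 * material_delay
           + 0.02 * planned_cost + rng.normal(0, 2, n))

X = np.column_stack([scope_changes, material_delay, planned_cost, crew_size])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, overrun)

names = ["scope_changes", "material_delay", "planned_cost", "crew_size"]
for name, imp in zip(names, model.feature_importances_):
    print(f"{name}: {imp:.2f}")
```

On data generated this way, the forest attributes most of its predictive power to `scope_changes` and `material_delay`, mirroring the kind of cost-driver analysis the study reports.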

Keywords: cost prediction, machine learning, project management, random forest, neural networks

Procedia PDF Downloads 29
5700 Practical Challenges of Tunable Parameters in Matlab/Simulink Code Generation

Authors: Ebrahim Shayesteh, Nikolaos Styliaras, Alin George Raducu, Ozan Sahin, Daniel Pombo VáZquez, Jonas Funkquist, Sotirios Thanopoulos

Abstract:

One important requirement in many code generation projects is defining some of the model parameters as tunable. This makes it possible to update the model parameters without performing the code generation again. This paper studies embedded code generation with the MATLAB/Simulink coder targeting the TwinCAT Simulink system. The generated runtime modules are then tested and deployed to the TwinCAT 3 engineering environment. However, defining parameters as tunable in MATLAB/Simulink code generation targeting TwinCAT is not very straightforward. This paper focuses on this subject and reviews some of the techniques tested here to make parameters tunable in the generated runtime modules. Three techniques are proposed for this purpose: normal tunable parameters, callback functions, and mask subsystems. Moreover, test Simulink models are developed and used to evaluate the proposed approaches. A brief summary of the study results follows. First, a parameter that is defined as tunable and used to set the values of other Simulink elements (e.g., the gain value of a gain block) can be changed after the code generation, and this update propagates to all elements defined in terms of the tunable parameter. For instance, if the parameter K=1 is defined as tunable in the code generation process and is used as the gain of a gain block in Simulink, the gain value of that block equals 1 in the TwinCAT environment after the code generation. But the value of K can be changed to a new value (e.g., K=2) in TwinCAT (without any new code generation in MATLAB), and the gain value of the block then changes to 2. Second, adding a callback function in the form of a "pre-load function," "post-load function," or "start function" will not make the parameters tunable without performing a new code generation.
These callback files are MATLAB scripts that run before the code generation, and the parameters defined or calculated in them are used as fixed values in the generated code. Thus, adding such files as callback functions to the Simulink model does not make these parameters flexible, since the MATLAB files are not attached to the generated code. To change the parameters defined or calculated in these files, the code generation must therefore be performed again. Adding the files as callback functions does, however, force MATLAB to run them before the code generation, so there is no need to define the parameters they contain separately. Finally, using a tunable parameter to define or calculate the values of other parameters through a mask is an efficient way to change the latter parameters after the code generation. For instance, if a tunable parameter K is used in calculating the values of two other parameters, K1 and K2, and the value of K is updated in the TwinCAT environment after the code generation, the values of K1 and K2 are also updated (without any new code generation).

Keywords: code generation, MATLAB, tunable parameters, TwinCAT

Procedia PDF Downloads 218
5699 Restoration of Steppes in Algeria: Case of the Stipa tenacissima L. Steppe

Authors: H. Kadi-Hanifi, F. Amghar

Abstract:

Steppes in arid Mediterranean zones are deeply threatened by desertification. To halt or alleviate the ecological and economic problems associated with desertification, management actions have been implemented over the last three decades, and the struggle against desertification has become a national priority in many countries. In Algeria, several management techniques have been used to cope with desertification. This study investigates the effect of exclosure on floristic diversity and chemical soil properties four years after its implementation. 167 phyto-ecological samples were studied, 122 inside the exclosure and 45 outside. Results showed that plant diversity, composition, vegetation cover, pastoral value, and soil fertility were significantly higher in the protected areas.

Keywords: Algeria, arid, desertification, pastoral management, soil fertility

Procedia PDF Downloads 182
5698 Just a Heads Up: Approach to Head Shape Abnormalities

Authors: Noreen Pulte

Abstract:

Prior to the 'Back to Sleep' Campaign in 1992, 1 of every 300 infants seen by Advanced Practice Providers had plagiocephaly. Insufficient attention is given to plagiocephaly and brachycephaly diagnoses in practice and in pediatric education. In this talk, Nurse Practitioners and Pediatric Providers will: (1) identify red flags associated with head shape abnormalities, (2) learn techniques they can teach parents to prevent head shape abnormalities, and (3) differentiate between plagiocephaly, brachycephaly, and craniosynostosis. The presenter is a Primary Care Pediatric Nurse Practitioner at Ann & Robert H. Lurie Children's Hospital of Chicago and the primary provider for its head shape abnormality clinics. She will help participants translate key information obtained from the birth history, review of systems, and developmental history to understand risk factors for head shape abnormalities and the progression of deformities. Synostotic and non-synostotic head shapes will be explained to help participants differentiate plagiocephaly and brachycephaly from synostotic head shapes. This knowledge is critical for the prompt referral of infants with craniosynostosis for surgical evaluation and correction; rapid referral can direct the patient to a minimally invasive surgical procedure rather than a craniectomy. For plagiocephaly and brachycephaly, a timely referral can also lead to physical therapy, if needed, to treat torticollis and help improve head shape. A well-timed referral to a head shape clinic may eliminate the need for a helmet or minimize the time spent in one. Practitioners will learn the importance of obtaining head measurements using calipers. The presenter will explain head calculations and how they are interpreted to determine the severity of head shape abnormalities; severity defines the treatment plan.
Participants will learn when to refer patients to a head shape abnormality clinic and techniques they should teach parents to perform while waiting for the referral appointment. The purpose, mechanics, and logistics of helmet therapy, including optimal time to initiate helmet therapy, recommended helmet wear-time, and tips for helmet therapy compliance, will be described. Case scenarios will be incorporated into the presenter's presentation to support learning. The salient points of the case studies will be explained and discussed. Practitioners will be able to immediately translate the knowledge and skills gained in this presentation into their clinical practice.

Keywords: plagiocephaly, brachycephaly, craniosynostosis, red flags

Procedia PDF Downloads 88