Search results for: timestamp-based sliding window model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17319

17019 Energy Conservation Strategies of Buildings in Hot, Arid Region: Al-Khobar, Saudi Arabia

Authors: M. H. Shwehdi, S. Raja Mohammad

Abstract:

Recently, energy savings have become more important as a result of the world financial crisis as well as unstable oil prices. Certainly, all entities need to adopt energy conservation and management strategies due to the high monthly consumption of their widespread locations and the advancement of their telecom systems. These system improvements necessitate the establishment of more exchange centers as well as providing opportunities for energy savings. This paper investigates the impact of HVAC system characteristics, operational strategies, and envelope thermal characteristics, together with energy conservation measures. These are classified under three types of measures, i.e., zero-investment, low-investment, and high-investment energy conservation measures. The study shows that the energy conservation measures (ECMs) pertaining to the HVAC system characteristics and operation represent the highest potential for energy reduction; attention should also be given to window thermal and solar radiation characteristics when large window areas are used. The type of glazing system needs to be carefully considered in the early design phase of future buildings. The paper presents the thermal optimization of different-sized centers in the hot-dry and hot-humid Saudi Arabian city of Al-Khobar, Eastern Province.

Keywords: energy conservation, optimization, thermal design, intermittent operation, exchange centers, hot-humid climate, Saudi Arabia

Procedia PDF Downloads 448
17018 Fractal Behaviour of Earthquake Sequences in Himalaya

Authors: Kamal, Adil Ahmad

Abstract:

Earthquakes are among the most complex natural dynamic processes, and hence a fractal model is considered to be their best representation. We present a novel method to process and analyse information hidden in earthquake sequences using fractal dimensions and Iterated Function Systems (IFS). Spatial and temporal variations in the fractal dimensions of seismicity observed around the Indian peninsula in the last 30 years are studied and examined as a possible precursor before large earthquakes in the region. IFS images for the observed seismicity in the Himalayan belt were also obtained. We scan the whole data set and coarse-grain a selected window to reduce it to four bins. A critical analysis of the four-cornered chaos game clearly shows that the spatial variation in earthquake occurrences in the Himalayan range is not random. Two subzones of the Himalaya have a tendency to follow each other in time.
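
A minimal sketch of the four-cornered chaos game mentioned above is given below. The encoding of the catalogue (coarse-graining inter-event times into four quartile bins) and all numerical values are assumptions made for illustration, not the study's actual data or binning rule.

```python
import numpy as np

def chaos_game_ifs(symbols, corners=None):
    """Map a symbol sequence (values 0..3) onto the unit square with the classic
    four-cornered chaos game: each new point lies halfway between the previous
    point and the corner indexed by the current symbol."""
    if corners is None:
        corners = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
    points = np.empty((len(symbols), 2))
    current = np.array([0.5, 0.5])            # start at the centre of the square
    for i, s in enumerate(symbols):
        current = 0.5 * (current + corners[s])
        points[i] = current
    return points

# Hypothetical usage: coarse-grain inter-event times into 4 quartile bins
rng = np.random.default_rng(0)
inter_event_times = rng.exponential(scale=10.0, size=5000)   # placeholder catalogue
bins = np.quantile(inter_event_times, [0.25, 0.5, 0.75])
symbols = np.digitize(inter_event_times, bins)               # values in {0, 1, 2, 3}
ifs_points = chaos_game_ifs(symbols)
# Non-uniform clustering of ifs_points indicates non-random structure in the sequence.
```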

Keywords: earthquakes, fractals, Himalaya, iterated function systems

Procedia PDF Downloads 298
17017 A Multilayer Perceptron Neural Network Model Optimized by Genetic Algorithm for Significant Wave Height Prediction

Authors: Luis C. Parra

Abstract:

The prediction of significant wave height is an issue of great interest in the field of coastal activities because of the non-linear behaviour of the wave height and the complexity of its prediction. This study presents a machine learning model to forecast the significant wave height recorded by the oceanographic wave-measuring buoys anchored at Mooloolaba, using Queensland Government data. Modeling was performed with a multilayer perceptron neural network optimized by a genetic algorithm (GA-MLP), considering ReLU as the activation function of the MLP. The GA is in charge of optimizing the MLP hyperparameters (learning rate, hidden layers, neurons, and activation functions) and of performing wrapper feature selection for the window width. Results are assessed using Mean Square Error (MSE), Root Mean Square Error (RMSE), and Mean Absolute Error (MAE). The GA-MLP algorithm was run with a population size of thirty individuals for eight generations for the optimization of the 5-step-ahead prediction, obtaining a performance evaluation of 0.00104 MSE, 0.03222 RMSE, 0.02338 MAE, and 0.71163% MAPE. The results of the analysis suggest that the GA-MLP model is effective in predicting significant wave height in a one-step forecast with distant time windows, presenting 0.00014 MSE, 0.01180 RMSE, 0.00912 MAE, and 0.52500% MAPE with a correlation factor of 0.99940. The GA-MLP algorithm was also compared with an ARIMA forecasting model, outperforming it in all performance criteria, which validates the potential of this algorithm.
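
A minimal sketch of the GA-over-MLP idea is shown below, assuming a sliding-window regression setup. The series, parameter ranges, reduced population/generation counts, and the mutation rule are illustrative placeholders, not the study's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)

# Placeholder wave-height series; real buoy data would be substituted here.
series = np.sin(np.linspace(0, 60, 2000)) + 0.1 * rng.standard_normal(2000)

def make_windows(series, width, horizon=1):
    """Build sliding-window inputs X and targets y (horizon steps ahead)."""
    X = np.array([series[i:i + width] for i in range(len(series) - width - horizon + 1)])
    y = series[width + horizon - 1:]
    return X, y

def fitness(genes):
    """Train a small MLP with the encoded hyperparameters; return validation MSE."""
    X, y = make_windows(series, genes["window"])
    split = int(0.8 * len(X))
    model = MLPRegressor(hidden_layer_sizes=(genes["neurons"],) * genes["layers"],
                         learning_rate_init=genes["lr"], activation="relu",
                         max_iter=300, random_state=0)
    model.fit(X[:split], y[:split])
    return mean_squared_error(y[split:], model.predict(X[split:]))

def random_genes():
    return {"window": int(rng.integers(4, 24)), "layers": int(rng.integers(1, 3)),
            "neurons": int(rng.integers(8, 64)), "lr": float(10 ** rng.uniform(-4, -2))}

population = [random_genes() for _ in range(10)]       # small population for illustration
for generation in range(5):
    scored = sorted(population, key=fitness)
    parents = scored[:4]                               # truncation selection
    children = []
    for _ in range(len(population) - len(parents)):
        child = dict(parents[rng.integers(len(parents))])  # clone a parent ...
        child["neurons"] = int(np.clip(child["neurons"] + rng.integers(-8, 9), 8, 64))  # ... and mutate
        children.append(child)
    population = parents + children
best = min(population, key=fitness)
print("best hyperparameters:", best)
```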

Keywords: significant wave height, machine learning optimization, multilayer perceptron neural networks, evolutionary algorithms

Procedia PDF Downloads 106
17016 Language Use in Autobiographical Memory Transcripts as a Window into Attachment Style and Personality

Authors: McKenzie S. Braley, Lesley Jessiman

Abstract:

If language reveals internal psychological processing, then it is likely that language use in autobiographical memory transcripts may also be used as a window into attachment style and related personality features. The current study, therefore, examined the possible associations between attachment style, negative affectivity, social inhibition, and linguistic features extracted from autobiographical memory transcripts. Young adult participants (n = 61) filled out attachment and personality questionnaires and orally reported a relationship-related memory. Memories were audio-recorded and later transcribed verbatim. Using a computerized linguistic extraction tool, positive affect words, negative affect words, and cognition words were extracted. Spearman's rank correlation coefficients revealed that attachment anxiety was negatively correlated with cognition words (rs = -0.26, p = 0.047) and that negative affectivity was negatively correlated with positive affect words (rs = -0.32, p = 0.012). The findings suggest that attachment style and personality are associated with speech styles indicative of both emotionality and depth of processing. Because attachment styles, negative affectivity, and social inhibition are associated with poor mental health outcomes, analyses of key linguistic features in autobiographical memory narratives may provide reliable screening tools for mental wellbeing.
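
A minimal sketch of the correlation analysis reported above is given below; the variables are simulated placeholders, not the study's questionnaire scores or word counts.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 61  # sample size reported in the abstract

# Placeholder variables; in the study these come from questionnaires and the
# linguistic extraction tool (proportions of word categories per transcript).
attachment_anxiety = rng.normal(size=n)
cognition_words = -0.3 * attachment_anxiety + rng.normal(scale=1.0, size=n)

rho, p_value = spearmanr(attachment_anxiety, cognition_words)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```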

Keywords: attachment style, autobiographical memory, language, negative affectivity, social inhibition

Procedia PDF Downloads 270
17015 Additive Friction Stir Manufacturing Process: Interest in Understanding Thermal Phenomena and Numerical Modeling of the Temperature Rise Phase

Authors: Antoine Lauvray, Fabien Poulhaon, Pierre Michaud, Pierre Joyot, Emmanuel Duc

Abstract:

Additive Friction Stir Manufacturing (AFSM) is a new industrial process that follows the emergence of friction-based processes. The AFSM process is a solid-state additive process using the energy produced by friction at the interface between a rotating non-consumable tool and a substrate. Friction depends on various parameters such as axial force, rotation speed, and friction coefficient. The feed material is a metallic rod that flows through a hole in the tool. Unlike Friction Stir Welding (FSW), for which abundant literature exists addressing many aspects from process implementation to characterization and modeling, there are still few research works focusing on AFSM. Therefore, there is still a lack of understanding of the physical phenomena taking place during the process. This research work aims at a better understanding and implementation of the AFSM process, thanks to numerical simulation and experimental validation performed on a prototype effector. Such an approach is considered a promising way to study the influence of the process parameters and to finally identify a relevant process window. The deposition of material through the AFSM process takes place in several phases. In chronological order, these phases are the docking phase, the dwell time phase, the deposition phase, and the removal phase. The present work focuses on the dwell time phase, which produces the temperature rise of the system composed of the tool, the filler material, and the substrate, due to pure friction. Analytic modeling of friction-based heat generation considers the rotational speed and the contact pressure as its main parameters. Another influential parameter is the friction coefficient, assumed to be variable due to self-lubrication of the system as the temperature rises or to the smoothing of the roughness of the materials in contact over time. This study proposes, through numerical modeling followed by experimental validation, to question the influence of the various input parameters on the dwell time phase. Rotation speed, temperature, spindle torque, and axial force are the main parameters monitored during the experiments and serve as reference data for the calibration of the numerical model. This research shows that the geometry of the tool as well as fluctuations of the input parameters, such as axial force and rotational speed, are very influential on the temperature reached and/or the time required to reach the targeted temperature. The main outcome is the prediction of a process window, which is a key result for a more efficient process implementation.
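
As a minimal illustration of this kind of analytic friction-heating estimate (not the AFSM prototype's actual model), the sketch below integrates the local frictional heat flux q = mu * p * omega * r over a circular tool contact; all parameter values are hypothetical.

```python
import numpy as np

def friction_heat_rate(mu, pressure, omega, radius, n_rings=200):
    """Numerically integrate q = mu * p * omega * r over a circular contact of
    given radius to estimate the total frictional heat input (W).
    Closed form for constant mu and p: Q = (2/3) * pi * mu * p * omega * R**3."""
    r = np.linspace(0.0, radius, n_rings)
    q_local = mu * pressure * omega * r            # local heat flux density (W/m^2)
    return np.trapz(q_local * 2.0 * np.pi * r, r)  # integrate over annular rings

# Hypothetical parameters for illustration only (not the prototype's values)
mu = 0.3                             # friction coefficient (dimensionless)
pressure = 50e6                      # contact pressure (Pa)
omega = 2.0 * np.pi * 1000 / 60.0    # 1000 rpm expressed in rad/s
radius = 0.008                       # tool contact radius (m)
print(f"estimated heat input: {friction_heat_rate(mu, pressure, omega, radius):.0f} W")
```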

Keywords: numerical model, additive manufacturing, friction, process

Procedia PDF Downloads 145
17014 Multiaxial Fatigue in Thermal Elastohydrodynamic Lubricated Contacts with Asperities and Slip

Authors: Carl-Magnus Everitt, Bo Alfredsson

Abstract:

Contact mechanics and tribology have been combined with fundamental fatigue and fracture mechanics to form the asperity mechanism, which supplies an explanation for surface-initiated rolling contact fatigue damage, called pitting or spalling. The cracks causing the pits initiate at one surface point and thereafter slowly grow into the material before a piece of material chips off to form the pit. In the current study, the lubrication aspects of fatigue initiation are simulated by passing a single asperity through a thermal elastohydrodynamic lubricated (TEHL) contact. The physics of the lubricant was described with Reynolds' equation, and the lubricant's pressure-viscosity relation was modeled by the Roelands equation, formulated to include temperature dependence. A pressure-dependent shear limit was incorporated. To capture the full phenomena of the sliding contact, the temperature field was resolved through the incorporation of the energy flow. The heat was mainly generated by shearing of the lubricant and by dry friction where metal contact occurred. The heat was then transported, and conducted, away by the solids and the lubricant. The fatigue damage caused by the asperities was evaluated through Findley's fatigue criterion. The results show that asperities of the size of the surface roughness found in applications may cause surface-initiated fatigue damage and crack initiation. The simulations also show that the asperities broke through the lubricant in the inlet, causing metal-to-metal contact with high friction. When the asperities thereafter moved through the contact, the sliding provided the asperities with lubricant, releasing the metal contact. The release of the metal contact was possible due to the high viscosity that the lubricant obtained from the high pressure. The metal contact in the inlet caused higher friction, which increased the risk of fatigue damage. Since the metal contact occurred in the inlet, it increased the fatigue risk more for asperities subjected to negative slip than for those subjected to positive slip. Therefore, the fatigue evaluations showed that the asperities subjected to negative slip yielded higher fatigue stresses than the asperities subjected to positive slip of equal magnitude. This is one explanation for why pitting is more common in the dedendum than in the addendum on pinion gear teeth. The simulations produced further validation for the asperity mechanism by showing that asperities cause surface-initiated fatigue and crack initiation.

Keywords: fatigue, rolling, sliding, thermal elastohydrodynamic

Procedia PDF Downloads 120
17013 An Overview of Bioinformatics Methods to Detect Novel Riboswitches Highlighting the Importance of Structure Consideration

Authors: Danny Barash

Abstract:

Riboswitches are RNA genetic control elements that were originally discovered in bacteria and provide a unique mechanism of gene regulation. They work without the participation of proteins and are believed to represent ancient regulatory systems on the evolutionary timescale. One of the biggest challenges in riboswitch research is that many are found in prokaryotes, but only a small percentage of known riboswitches have been found in certain eukaryotic organisms. The few examples of eukaryotic riboswitches were identified using sequence-based bioinformatics search methods that include some slight structural considerations. These pattern-matching methods were the first to be applied for riboswitch detection, and they can be programmed very efficiently using a data structure called affix arrays, making them suitable for genome-wide searches of riboswitch patterns. However, they are limited in their ability to detect harder-to-find riboswitches that deviate from the known patterns. Several methods have been developed since then to tackle this problem. The one most commonly used by practitioners is Infernal, which relies on Hidden Markov Models (HMMs) and Covariance Models (CMs). Profile Hidden Markov Models are also employed in the pHMM Riboswitch Scanner web application, independently from Infernal. Other computational approaches that have been developed include RMDetect, which uses 3D structural modules, and RNAbor, which utilizes the Boltzmann probability of structural neighbors. We have tried to incorporate more sophisticated secondary structure considerations based on RNA folding prediction using several strategies. The first idea was to utilize window-based methods in conjunction with folding predictions by energy minimization. The moving-window approach is heavily geared towards secondary structure consideration relative to sequence, which is treated as a constraint. However, the method cannot be used genome-wide due to its high cost, because each folding prediction by energy minimization in the moving window is computationally expensive, enabling scans only in the vicinity of genes of interest. The second idea was to remedy the inefficiency of the previous approach by constructing a pipeline that consists of inverse RNA folding considering RNA secondary structure, followed by a BLAST search that is sequence-based and highly efficient. This approach, which relies on inverse RNA folding in general and on our own in-house fragment-based inverse RNA folding program RNAfbinv in particular, shows the capability to find attractive candidates that are missed by Infernal and other standard methods used for riboswitch detection. We demonstrate attractive candidates found by both the moving-window approach and the inverse RNA folding approach performed together with BLAST. We conclude that structure-based methods like the two strategies outlined above hold considerable promise in detecting riboswitches and other conserved RNAs of functional importance in a variety of organisms.

Keywords: riboswitches, RNA folding prediction, RNA structure, structure-based methods

Procedia PDF Downloads 234
17012 A Homogenized Mechanical Model of Carbon Nanotubes/Polymer Composite with Interface Debonding

Authors: Wenya Shu, Ilinca Stanciulescu

Abstract:

Carbon nanotubes (CNTs) possess attractive properties, such as high stiffness and strength and high thermal and electrical conductivities, making them a promising filler in multifunctional nanocomposites. Although CNTs can be efficient reinforcements, the expected level of mechanical performance of CNT-polymers is often not reached in practice due to the poor mechanical behavior of the CNT-polymer interfaces. It is believed that the interactions of CNT and polymer mainly result from the Van der Waals force. Interface debonding is a fracture and delamination phenomenon; thus, cohesive zone modeling (CZM) is deemed to capture the interface behavior well. Detailed cohesive zone modeling provides an option to consider the CNT-matrix interactions, but it brings difficulties in mesh generation and leads to high computational costs. Homogenized models that smear the fibers in the ground matrix and treat the material as homogeneous have been studied in many works to simplify simulations. However, based on the perfect-interface assumption, the traditional homogenized model obtained by mixing rules severely overestimates the stiffness of the composite, even compared with the result of the CZM with an artificially very strong interface. A mechanical model that can take into account the interface debonding and achieve accuracy comparable to the CZM is thus essential. The present study first investigates the CNT-matrix interactions by employing cohesive zone modeling. Three different coupled CZM laws, i.e., bilinear, exponential, and polynomial, are considered. These studies indicate that the shape of the chosen CZM constitutive law does not significantly influence the simulations of interface debonding. Assuming a bilinear traction-separation relationship, the debonding process of a single CNT in the matrix is divided into three phases and described by differential equations. The analytical solutions corresponding to these phases are derived. A homogenized model is then developed by introducing a parameter characterizing interface sliding into the mixing theory. The proposed mechanical model is implemented in FEAP8.5 as a user material. The accuracy and limitations of the model are discussed through several numerical examples. The CZM simulations in this study reveal important factors in the modeling of CNT-matrix interactions. The analytical solutions and the proposed homogenized model provide alternative methods to efficiently investigate the mechanical behavior of CNT/polymer composites.
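
A minimal sketch of the bilinear traction-separation law assumed above is given below; the interface parameters (peak traction and the two characteristic separations) are hypothetical placeholders, not values from the study.

```python
import numpy as np

def bilinear_traction(delta, delta0, delta_f, t_max):
    """Bilinear cohesive (traction-separation) law: linear elastic rise up to
    (delta0, t_max), then linear softening to zero traction at delta_f;
    fully debonded (zero traction) beyond delta_f."""
    delta = np.asarray(delta, dtype=float)
    stiffness = t_max / delta0
    softening = t_max * (delta_f - delta) / (delta_f - delta0)
    return np.where(delta <= delta0, stiffness * delta,
                    np.where(delta <= delta_f, softening, 0.0))

# Hypothetical interface parameters for illustration only
delta0, delta_f, t_max = 1e-4, 1e-3, 50.0    # separations in mm, traction in MPa
separation = np.linspace(0.0, 1.5e-3, 7)
print(bilinear_traction(separation, delta0, delta_f, t_max))
```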

Keywords: carbon nanotube, cohesive zone modeling, homogenized model, interface debonding

Procedia PDF Downloads 129
17011 Adaptation of Projection Profile Algorithm for Skewed Handwritten Text Line Detection

Authors: Kayode A. Olaniyi, Tola. M. Osifeko, Adeola A. Ogunleye

Abstract:

Text line segmentation is an important step in document image processing. It represents a labeling process that assigns the same label, using a distance-metric probability, to spatially aligned units. Text line detection techniques have been implemented successfully mainly in printed documents. However, processing handwritten text, especially in unconstrained documents, remains a key problem. This is because unconstrained handwritten text lines are often not uniformly skewed, the spaces between text lines may not be obvious, and the problem is complicated by the nature of handwriting and by the overlapping ascenders and/or descenders of some characters. Hence, text line detection and segmentation represent a leading challenge in handwritten document image processing. Text line detection methods that rely on the traditional global projection profile of the text document cannot efficiently cope with the problem of variable skew angles between different text lines. Hence, the formulation of a horizontal line as a separator is often not effective. This paper presents a technique to segment a handwritten document into distinct lines of text. The proposed algorithm starts by partitioning the text image across its width into vertical strips of about 5% each. For each 5% vertical strip, the histogram of horizontal runs is projected. We have worked with the assumption that text lines appearing within a single strip are almost parallel to each other. The algorithm provides a sliding window through the first vertical strip on the left side of the page and runs through it to identify each new minimum corresponding to a valley in the projection profile. Each valley represents the starting point of an orientation line, and the ending point is the minimum point on the projection profile of the next vertical strip. The derived text lines traverse around any obstructing handwritten connected component by associating it with either the line above or the line below. The decision to associate such a connected component is made using the probability obtained from a distance metric. The technique outperforms the global projection profile for text line segmentation and is robust in handling skewed documents and those with lines running into each other.
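
A minimal sketch of the per-strip projection-profile step is shown below, assuming a binary image where 1 denotes ink; the smoothing and valley-detection details are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def strip_valleys(binary_image, strip_fraction=0.05, smooth=9):
    """For each vertical strip (~5% of the page width), compute the horizontal
    projection profile (ink pixels per row) and return the rows of its local
    minima (valleys), which approximate the gaps between text lines in that strip."""
    h, w = binary_image.shape
    strip_w = max(1, int(round(strip_fraction * w)))
    valleys_per_strip = []
    for x0 in range(0, w, strip_w):
        profile = binary_image[:, x0:x0 + strip_w].sum(axis=1).astype(float)
        kernel = np.ones(smooth) / smooth
        profile = np.convolve(profile, kernel, mode="same")   # light smoothing
        valleys = [y for y in range(1, h - 1)
                   if profile[y] <= profile[y - 1] and profile[y] <= profile[y + 1]]
        valleys_per_strip.append(valleys)
    return valleys_per_strip

# Hypothetical usage with a synthetic binary page (1 = ink, 0 = background)
page = np.zeros((200, 400), dtype=np.uint8)
page[30:45, :] = 1     # fake text line
page[90:105, :] = 1
page[150:165, :] = 1
print([len(v) for v in strip_valleys(page)])
```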

Keywords: connected-component, projection-profile, segmentation, text-line

Procedia PDF Downloads 122
17010 Petroleum Generative Potential of Eocene-Paleocene Sequences of Potwar Basin, Pakistan

Authors: Syed Bilawal Ali Shah

Abstract:

The hydrocarbon source rock potential of the Eocene-Paleocene formations of the Potwar Basin, part of the Upper Indus Basin, Pakistan, was investigated using geochemical and petrological techniques. Analysis was performed on forty-five core-cutting samples from two wells. The sequences analysed are the Sakesar, Lockhart, and Patala formations of the Potwar Basin. The Patala Formation is one of the Potwar Basin's major petroleum-bearing source rocks. The VR (%Ro) and Tmax data of the Lockhart Formation samples indicate that the formation is immature to early mature for hydrocarbon generation; samples from the Patala and Sakesar formations, however, range from early maturity (oil window) to the peak oil-generation window. With 3.37 weight percent mean TOC and HI values up to 498 mg HC/g TOC, the source rock characteristics of the Sakesar and Patala formations generally exhibit good to very strong petroleum generative potential. The majority of sediments representing the Lockhart Formation have 1.5 wt.% mean TOC, with fair to good potential and HI values ranging between 203 and 498 mg HC/g TOC. The analysed sediments of all formations contain primarily mixed Type II/III and Type III kerogen. The analysed sediments indicate that both the Sakesar and Patala formations possess good oil-generation potential and may act as oil source rocks in the Potwar Basin.

Keywords: Potwar Basin, Patala Shale, Rock-Eval pyrolysis, Indus Basin, VR %Ro

Procedia PDF Downloads 85
17009 Logistic Regression Model versus Additive Model for Recurrent Event Data

Authors: Entisar A. Elgmati

Abstract:

Recurrent infant diarrhoea is studied using daily data collected in Salvador, Brazil, over one year and three months. A logistic regression model is fitted instead of Aalen's additive model, using the same covariates that were used in the analysis with the additive model. The model gives results reasonably similar to those obtained with the additive regression model. In addition, the problem of the estimated conditional probabilities not being constrained between zero and one in the additive model is solved here. Also, martingale residuals, which have been used to judge the goodness of fit of the additive model, are shown to be useful for judging the goodness of fit of the logistic model.
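
A minimal sketch of fitting such a logistic model to daily recurrent-event records is shown below; the covariates and simulated data are placeholders, not the Salvador study variables.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Hypothetical daily panel: one row per child per day, outcome = episode (0/1).
n = 3000
age_months = rng.uniform(0, 36, n)
prior_episodes = rng.poisson(1.0, n)
sanitation = rng.integers(0, 2, n)
logit = -2.0 + 0.02 * age_months + 0.3 * prior_episodes - 0.5 * sanitation
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([age_months, prior_episodes, sanitation]))
model = sm.Logit(y, X).fit(disp=False)
# Fitted probabilities are constrained to (0, 1) by construction, unlike the
# conditional probabilities estimated from an additive (Aalen-type) model.
print(model.params)
print(model.predict(X)[:5])
```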

Keywords: additive model, cumulative probabilities, infant diarrhoea, recurrent event

Procedia PDF Downloads 633
17008 A Hybrid Algorithm for Collaborative Transportation Planning among Carriers

Authors: Elham Jelodari Mamaghani, Christian Prins, Haoxun Chen

Abstract:

This paper concentrates on collaborative transportation planning (CTP) among multiple carriers with pickup and delivery requests and time windows. The problem is a vehicle routing problem with constraints from standard vehicle routing problems and new constraints from a real-world application. In the problem, each carrier has a finite number of vehicles, and each request is a pickup and delivery request with a time window. Moreover, each carrier has reserved requests, which must be served by itself, whereas its exchangeable requests can be outsourced to and served by other carriers. This collaboration among carriers can help them reduce total transportation costs. A mixed integer programming model is proposed for the problem. To solve the model, a hybrid algorithm that combines a Genetic Algorithm and Simulated Annealing (GASA) is proposed. This algorithm takes advantage of both GA and SA at the same time. After tuning the parameters of the algorithm with the Taguchi method, experiments were conducted and experimental results are provided for the hybrid algorithm. The results are compared with those obtained by a commercial solver. The comparison indicates that GASA significantly outperforms the commercial solver, as shown in the general hybridisation sketch below.
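
The sketch below illustrates the general GA + SA hybridisation pattern (GA for global search, SA as a local refinement of offspring) on a toy routing cost. It omits the pickup-delivery pairing, time-window, and carrier-exchange constraints of the actual CTP model, and all names and parameters are illustrative.

```python
import math
import random

random.seed(0)

# Toy cost function standing in for the CTP routing cost: closed-tour length over
# random points. The real model would add pairing, time-window, and carrier constraints.
points = [(random.random(), random.random()) for _ in range(20)]
def cost(order):
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def simulated_annealing(order, t0=1.0, cooling=0.95, steps=200):
    """SA local improvement: swap two positions, accept worse moves with a
    temperature-dependent probability."""
    best, t = list(order), t0
    for _ in range(steps):
        cand = list(best)
        i, j = random.sample(range(len(cand)), 2)
        cand[i], cand[j] = cand[j], cand[i]
        if cost(cand) < cost(best) or random.random() < math.exp((cost(best) - cost(cand)) / t):
            best = cand
        t *= cooling
    return best

def crossover(p1, p2):
    """Order crossover (OX): copy a slice from p1, fill the rest in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    fill = [g for g in p2 if g not in child]
    for k in range(len(child)):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

population = [random.sample(range(len(points)), len(points)) for _ in range(20)]
for generation in range(30):
    population.sort(key=cost)
    parents = population[:10]
    children = [crossover(random.choice(parents), random.choice(parents)) for _ in range(10)]
    # SA refines each offspring, which is the hybridisation step of GASA.
    population = parents + [simulated_annealing(c) for c in children]
print("best toy routing cost:", round(cost(min(population, key=cost)), 3))
```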

Keywords: centralized collaborative transportation, collaborative transportation with pickup and delivery, collaborative transportation with time windows, hybrid algorithm of GA and SA

Procedia PDF Downloads 390
17007 Finite Volume Method for Flow Prediction Using Unstructured Meshes

Authors: Juhee Lee, Yongjun Lee

Abstract:

In designing low-energy buildings, the heat transfer through a large glass area or wall becomes critical. Multiple layers of window glazing and wall materials are employed for high insulation. The gravity-driven air flow between window panes or wall layers is a natural convection phenomenon that is a key part of the heat transfer. As the first step of the natural heat transfer analysis, this study presents the development and application of a finite volume method for the numerical computation of viscous incompressible flows. It will become part of a natural convection analysis with a high-order scheme, a multigrid method, and dual time stepping in the future. A fully implicit, second-order finite volume method is used to discretize and solve the fluid flow on unstructured grids composed of arbitrarily shaped cells. The governing equations are integrated and discretised in the finite volume manner using a collocated arrangement of variables. The convergence of the SIMPLE segregated algorithm for the solution of the coupled nonlinear algebraic equations is accelerated by using a sparse matrix solver such as BiCGSTAB. The method used in the present study is verified by applying it to flows for which either the numerical solution is known or the solution can be obtained using another numerical technique available in other studies. The accuracy of the method is assessed through grid refinement.
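
As a small, hedged illustration of the sparse linear solve mentioned above, the sketch below assembles a 2-D Poisson-type system (a stand-in for one inner linear system of a SIMPLE iteration, e.g., the pressure correction) and solves it with SciPy's BiCGSTAB; the grid and matrix are illustrative, not the paper's discretisation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab

# Illustrative 5-point Laplacian on an n x n grid, assembled in sparse CSR form.
n = 50
main = 4.0 * np.ones(n * n)
off = -1.0 * np.ones(n * n - 1)
off[np.arange(1, n * n) % n == 0] = 0.0   # break coupling across row boundaries
far = -1.0 * np.ones(n * n - n)
A = sp.diags([main, off, off, far, far], [0, -1, 1, -n, n], format="csr")
b = np.ones(n * n)

x, info = bicgstab(A, b)                  # info == 0 means the solver converged
print("converged:", info == 0, "residual norm:", np.linalg.norm(A @ x - b))
```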

Keywords: finite volume method, fluid flow, laminar flow, unstructured grid

Procedia PDF Downloads 285
17006 Influence of Counterface and Environmental Conditions on the Lubricity of Multilayer Graphene Coatings Produced on Nickel by Chemical Vapour Deposition

Authors: Iram Zahra

Abstract:

Friction and wear properties of multilayer graphene (MLG) coatings on a nickel substrate were investigated at the macroscale, and the different failure mechanisms acting at the interface of the nickel-graphene coatings were evaluated. Multilayer graphene coatings were produced on a nickel substrate using the atmospheric chemical vapour deposition (CVD) technique. Wear tests were performed on a pin-on-disk tribometer under dry air conditions and using saltwater solution, distilled water, and mineral oil as lubricants; the counterparts used in these wear tests were fabricated from stainless steel, chromium, and silicon nitride. The wear test parameters, namely rotational speed, wear track diameter, temperature, relative humidity, and load, were 60 rpm, 6 mm, 22˚C, 45%, and 2 N, respectively. To analyse the friction and wear behaviour, coefficient of friction (COF) versus time curves were plotted, and the sliding surfaces of the samples and counterparts were examined using an optical microscope. Results indicated that graphene-coated nickel under mineral oil lubrication and dry conditions gave the minimum average COF (0.05) and wear track width (~151 µm) against the three different types of counterparts. In contrast, uncoated nickel samples showed the maximum wear track width (~411 µm) and COF (0.5). Thorough investigation and analysis concluded that the graphene-coated samples have a two times lower COF and three times lower wear than the bare nickel samples. Furthermore, mechanical failures were significantly lower in the case of graphene-coated nickel. The overall findings suggest that multilayer graphene coatings drastically decrease wear and friction on a nickel substrate at the macroscale under various lubricating conditions and against different counterparts.

Keywords: friction, lubricity, multilayer graphene, sliding, wear

Procedia PDF Downloads 138
17005 The Automatic Transliteration Model of Images of the Book Hamong Tani Using Statistical Approach

Authors: Agustinus Rudatyo Himamunanto, Anastasia Rita Widiarti

Abstract:

Transliteration of Javanese manuscripts is one method of preserving and passing on the wealth of past literature to the present generation in Indonesia. The manual transliteration process commonly requires philologists and takes a relatively long time. The automatic transliteration process is expected to shorten this time and thus assist the work of philologists. The preprocessing and segmentation stages are applied first to manage the document images, obtaining script image units that are free from noise and similar in properties such as thickness, size, and slope. The next stage, feature extraction, is used to find unique characteristics that distinguish each Javanese script image. One of the characteristics used in this research is the number of black pixels in each image unit. Each Javanese script image contained in the training data undergoes the same process as the input characters. System testing was performed with data from the book Hamong Tani. The book Hamong Tani was selected due to its content, age, and number of pages, which were considered sufficient as model experimental input. Based on the results of automatic transliteration testing on random pages, the maximum correctness obtained was 81.53%. This result was obtained with a 32x32-pixel input image size and a 5x5 image window. With regard to these results, it can be concluded that the proposed automatic transliteration model is relatively good.
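
A minimal sketch of the black-pixel-count feature described above is given below; the exact windowing scheme (non-overlapping 5x5 blocks over a 32x32 glyph) is an assumption made for illustration, since the abstract only states the sizes.

```python
import numpy as np

def black_pixel_features(glyph, size=32, window=5):
    """Given a binary glyph image already normalised to size x size (1 = ink),
    slide a window x window block over it and count the black pixels in each
    block, producing a compact feature vector for classification."""
    assert glyph.shape == (size, size)
    feats = []
    for y in range(0, size - window + 1, window):
        for x in range(0, size - window + 1, window):
            feats.append(int(glyph[y:y + window, x:x + window].sum()))
    return np.array(feats)

# Hypothetical usage with a synthetic 32x32 binary glyph
rng = np.random.default_rng(7)
glyph = (rng.random((32, 32)) > 0.7).astype(np.uint8)
print(black_pixel_features(glyph))
```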

Keywords: Javanese script, character recognition, statistical, automatic transliteration

Procedia PDF Downloads 337
17004 Using Geo-Statistical Techniques and Machine Learning Algorithms to Model the Spatiotemporal Heterogeneity of Land Surface Temperature and its Relationship with Land Use Land Cover

Authors: Javed Mallick

Abstract:

In metropolitan areas, rapid changes in land use and land cover (LULC) have ecological and environmental consequences. Saudi Arabia's cities have experienced tremendous urban growth since the 1990s, resulting in urban heat islands, groundwater depletion, air pollution, loss of ecosystem services, and so on. From 1990 to 2020, this study examines the variance and heterogeneity in land surface temperature (LST) caused by LULC changes in Abha-Khamis Mushyet, Saudi Arabia. LULC was mapped using a support vector machine (SVM). The mono-window algorithm was used to calculate the land surface temperature (LST). To identify LST clusters, the local indicator of spatial association (LISA) model was applied to the spatiotemporal LST maps. In addition, the parallel coordinate plot (PCP) method was used to investigate the relationship between LST clusters and urban biophysical variables as a proxy for LULC. According to the LULC maps, urban areas increased by more than 330% between 1990 and 2018. Between 1990 and 2018, built-up areas had an 83.6% transitional probability. Furthermore, between 1990 and 2020, vegetation and agricultural land were converted into built-up areas at rates of 17.9% and 21.8%, respectively. Uneven LULC changes in built-up areas result in more LST hotspots. LST hotspots were associated with high NDBI but not with NDWI or NDVI. This study could assist policymakers in developing mitigation strategies for urban heat islands.

Keywords: land use land cover mapping, land surface temperature, support vector machine, LISA model, parallel coordinate plot

Procedia PDF Downloads 75
17003 Design of Robust and Intelligent Controller for Active Removal of Space Debris

Authors: Shabadini Sampath, Jinglang Feng

Abstract:

With huge kinetic energy, space debris poses a major threat to astronauts' space activities and to spacecraft in orbit if a collision happens. The active removal of space debris is required in order to avoid the frequent collisions that would otherwise occur. In addition, the amount of space debris will increase uncontrollably, posing a threat to the safety of the entire space system. However, the safe and reliable removal of large-scale space debris has been a huge challenge to date. While capturing and deorbiting space debris, the space manipulator has to achieve high control precision. However, due to uncertainties and unknown disturbances, it is difficult to coordinate the control of the space manipulator. To address this challenge, this paper focuses on developing a robust and intelligent control algorithm that controls joint movement and restricts it to the sliding manifold by reducing uncertainties. A neural network adaptive sliding mode controller (NNASMC) is applied with the objective of finding the control law such that the joint motions of the space manipulator follow the given trajectory. Computed torque control (CTC) is an effective motion control strategy that is used in this paper for computing the space manipulator arm torque to generate the required motion. Based on the Lyapunov stability theorem, the proposed intelligent controller NNASMC together with CTC guarantees the robustness and global asymptotic stability of the closed-loop control system. Finally, the controllers used in the paper are modeled and simulated using MATLAB Simulink. The results are presented to prove the effectiveness of the proposed controller approach.

Keywords: GNC, active removal of space debris, AI controllers, MATLAB Simulink

Procedia PDF Downloads 130
17002 Simulation and Analysis of Passive Parameters of Building in eQuest: A Case Study in Istanbul, Turkey

Authors: Mahdiyeh Zafaranchi

Abstract:

With the rapid development of urbanization and the improvement of living standards in the world, the energy consumption and carbon emissions of the building sector are expected to increase in the near future; because of that, energy-saving issues have become more important among engineers. Moreover, the building sector is a major contributor to energy consumption and carbon emissions. The concept of the efficient building appeared as a response to the need to reduce energy demand in this sector, with the main purpose of shifting from standard buildings to low-energy buildings. Although energy saving should happen in all steps of a building's life cycle (material production, construction, demolition), the main concept of the energy-efficient building is saving energy during the life expectancy of the building by using passive and active systems, without sacrificing comfort and quality to reach these goals. The main aim of this study is to investigate passive strategies (which do not need energy consumption or which use renewable energy) to achieve energy-efficient buildings. Energy retrofit measures were explored with the eQuest software using a case study as the base model. The study investigates the influence of major factors such as the thermal transmittance (U-value) of materials, windows, shading devices, thermal insulation, the rate of exposed envelope, the window/wall ratio, and the lighting system on the energy consumption of the building. The base model is located in Istanbul, Turkey. The impact of eight passive parameters on energy consumption was assessed. After analyzing the base model with eQuest, a final scenario with good energy performance was suggested. The results showed that a decrease in the U-values of materials, in the rate of exposed envelope, and in the window area had a significant effect on energy consumption. Finally, annual savings of about 10.5% in electricity consumption and about 8.37% in gas consumption were achieved in the suggested model.

Keywords: efficient building, electric and gas consumption, eQuest, Passive parameters

Procedia PDF Downloads 110
17001 Electrical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, then event detection and feature extraction, then general appliance modeling, and identification at the final stage. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features. Appliance features are required for the accurate identification of the household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models for the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on the power demand, and then detecting the time at which each selected appliance changes its state. In order to fit with the capabilities of practical existing smart meters, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data are simulated with the Load Profile Generator (LPG) software, which has not previously been considered for NILM purposes in the literature. LPG is a numerical software package that uses behaviour simulation of the people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect. It also facilitates the extraction of specific features used for general appliance modeling. In addition to this, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract a power interval within which the operation of the selected appliance falls, along with a time vector of the values delimiting the state transitions of the appliance. After this, appliance signatures are formed from the extracted power, geometrical, and statistical features. Afterwards, those signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both simulated data from LPG and real measurements from the Reference Energy Disaggregation Dataset (REDD). For that, we compute confusion-matrix-based performance metrics, considering accuracy, precision, recall, and error rate. The performance of our methodology is then compared with other detection techniques previously used in the literature, such as detection techniques based on statistical variations and abrupt changes (variance sliding window and cumulative sum).
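
A minimal DTW sketch of the kind of signature matching used in the identification step is shown below; the power segments and appliance signatures are made-up placeholders at the 1/60 Hz scale, not data from LPG or REDD.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two 1-D sequences
    (e.g., a detected power transient and a stored appliance signature)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Hypothetical 1/60 Hz power segments (W): a detected event vs. two stored signatures
event = [5, 5, 120, 125, 122, 6, 5]
fridge_sig = [4, 118, 121, 120, 5]
kettle_sig = [3, 2000, 2050, 4]
label = min([("fridge", dtw_distance(event, fridge_sig)),
             ("kettle", dtw_distance(event, kettle_sig))], key=lambda t: t[1])
print("best match:", label)
```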

Keywords: electrical disaggregation, DTW, general appliance modeling, event detection

Procedia PDF Downloads 75
17000 Maintenance Work Order Management Tool (Desktop & Mobile Solution)

Authors: Haitham Al Rawahi

Abstract:

Oman Electricity Transmission Company (OETC) has implemented a Computerized Maintenance Management System (CMMS) based on the Oracle enterprise asset management model, e-AM. This was implemented in cooperation with Nama Shared Services (NSS). CMMS is mainly used to create maintenance work orders with a preconfigured workflow of defined maintenance schedules/plans, required resources, and materials, obtaining shutdown approvals, completing maintenance activities, and closing the work orders. Furthermore, CMMS is also configured with asset failure classifications, asset hierarchy, asset maintenance activities, integration with spare inventories, etc. Since 2017, site engineers have been working with CMMS by manually filling in all related maintenance and inspection records on paper forms and then scanning and attaching them in CMMS for further analysis. The site engineer finalizes all the paperwork at the site and then goes back to the office to scan and attach it to the work order in CMMS. This creates subtasks for the site engineer and makes the process very difficult and lengthy. Also, there is a significant risk of important fields on the paper being missed or lost, because the forms are filled in by pen. In addition, the site engineer may spend days working outside of the office. Therefore, OETC has decided to digitize these inspection and maintenance forms on one platform in CMMS, which can be opened both online and offline. ArcGIS product formats or web-enabled solutions, which can be accessed from mobile and desktop devices via ArcMap modules, will be used as well. The purpose of interlinking is to connect the maintenance and inspection forms to the work orders in e-AM, with which the site engineer interacts daily. This ArcGIS environment or tool is designed to link with e-AM, so when the site engineer opens this application from the site, a window takes him through ArcGIS. This window opens the maintenance forms and shows the required fields to fill in and save the work through the mobile application. After the work is saved, whether the network is available or not (online/offline), a notification is triggered to the line manager to review and take further action (approve, reject, or request more information). In this function, the user can see the work orders assigned to his department as well as a chart of all work orders with their status. The approver also has the ability to see the statistics of all work.

Keywords: e-AM, GIS, CMMS, integration

Procedia PDF Downloads 94
16999 Enhancer: An Effective Transformer Architecture for Single Image Super Resolution

Authors: Pitigalage Chamath Chandira Peiris

Abstract:

A widely researched domain in the field of image processing in recent times has been single image super-resolution, which tries to restore a high-resolution image from a single low-resolution image. Many single image super-resolution efforts have been completed utilizing both traditional and deep learning methodologies, as well as a variety of other approaches. Deep learning-based super-resolution methods, in particular, have received significant interest. As of now, the most advanced image restoration approaches are based on convolutional neural networks; nevertheless, only a few efforts have been made using Transformers, which have demonstrated excellent performance on high-level vision tasks. The effectiveness of CNN-based algorithms in image super-resolution has been impressive. However, these methods cannot completely capture the non-local features of the data. Enhancer is a simple yet powerful Transformer-based approach for enhancing the resolution of images. In this study, a method for single image super-resolution was developed that utilizes an efficient and effective transformer design. The proposed architecture makes use of a locally enhanced window transformer block, built on non-overlapping window-based self-attention, to alleviate the enormous computational load associated with global self-attention. Additionally, it incorporates depth-wise convolution in the feed-forward network to enhance its ability to capture local context. The study is assessed by comparing the results obtained on popular datasets with those obtained by other techniques in the domain.
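
The sketch below illustrates plain non-overlapping window self-attention, the core idea the block above builds on; it is a single-head NumPy illustration with random weights, not the Enhancer architecture itself (which additionally uses the locally enhanced block and depth-wise convolution). All sizes and weights are assumptions.

```python
import numpy as np

def window_partition(x, ws):
    """Split an (H, W, C) feature map into non-overlapping (ws*ws, C) windows."""
    H, W, C = x.shape
    x = x.reshape(H // ws, ws, W // ws, ws, C).transpose(0, 2, 1, 3, 4)
    return x.reshape(-1, ws * ws, C)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def window_self_attention(x, ws, Wq, Wk, Wv):
    """Single-head self-attention computed independently inside each window,
    which keeps the cost proportional to the number of windows instead of
    quadratic in the full image size."""
    wins = window_partition(x, ws)                 # (num_windows, ws*ws, C)
    q, k, v = wins @ Wq, wins @ Wk, wins @ Wv
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(q.shape[-1]))
    return attn @ v                                # (num_windows, ws*ws, C)

# Hypothetical usage: 32x32 feature map, 16 channels, 8x8 windows, random weights
rng = np.random.default_rng(0)
C, ws = 16, 8
feature_map = rng.standard_normal((32, 32, C))
Wq, Wk, Wv = (rng.standard_normal((C, C)) * 0.1 for _ in range(3))
out = window_self_attention(feature_map, ws, Wq, Wk, Wv)
print(out.shape)   # (16, 64, 16): 16 windows of 64 tokens each
```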

Keywords: single image super resolution, computer vision, vision transformers, image restoration

Procedia PDF Downloads 103
16998 An Assessment of the Unconventional Hydrocarbon Potential of the Silurian Dadaş Shales in Diyarbakır Basin, Türkiye

Authors: Ceren Sevimli, Sedat İnan

Abstract:

The Silurian Dadaş Formation within the Diyarbakır Basin in SE Türkiye, like other Silurian shales in North Africa and the Middle East, represents a significant prospect for conventional and unconventional hydrocarbon exploration. The Diyarbakır Basin remains relatively underexplored, presenting untapped potential that warrants further investigation. This study focuses on the thermal maturity and hydrocarbon generation histories of the Silurian Dadaş shales, utilizing a basin modeling approach. The Dadaş shales are organic-rich and contain mainly Type II kerogen; in particular, the basal layer contains up to 10 wt.% TOC and is thus named the "hot shale". The research integrates geological, geochemical, and basin modeling data to elucidate the unconventional hydrocarbon potential of this formation, which is crucial given the global demand for energy and the need for new resources. Data obtained from previous studies were used to calibrate the basin model, which was established using the PetroMod software (Schlumberger). The calibrated model results suggest that the Dadaş shales are in the oil-generation window and that the major episode of thermal maturation and hydrocarbon generation took place prior to the Alpine orogeny (uplift and erosion). The modeling results elucidate the burial history, maturity history, and hydrocarbon generation history of the Silurian-aged Dadaş shales, as well as their hydrocarbon content in the area.

Keywords: Dadaş Formation, Diyarbakır Basin, Silurian hot shale, unconventional hydrocarbon

Procedia PDF Downloads 31
16997 Magneto-Transport of Single Molecular Transistor Using Anderson-Holstein-Caldeira-Leggett Model

Authors: Manasa Kalla, Narasimha Raju Chebrolu, Ashok Chatterjee

Abstract:

We have studied the quantum transport properties of a single molecular transistor in the presence of an external magnetic field using the Keldysh Green function technique. We use the Anderson-Holstein-Caldeira-Leggett model to describe the single molecular transistor, which consists of a molecular quantum dot (QD) coupled to two metallic leads and placed on a substrate that acts as a heat bath. The phonons are eliminated by the Lang-Firsov transformation, and the effective Hamiltonian is used to study the effect of an external magnetic field on the spectral density function, tunneling current, differential conductance, and spin polarization. A peak in the spectral function corresponds to a possible excitation. In the absence of a magnetic field, the spin-up and spin-down states are degenerate; this degeneracy is lifted by the magnetic field, leading to the splitting of the central peak of the spectral function. The tunneling current decreases with increasing magnetic field. We have observed that even the differential conductance peak of the zero-magnetic-field curve is split in the presence of the electron-phonon interaction. As the magnetic field is increased, each peak splits into two, and each peak indicates the existence of an energy level. Thus, the number of energy levels available for transport in the bias window increases with the magnetic field. In the presence of the electron-phonon interaction, the differential conductance is in general reduced and decreases faster with the magnetic field. As the magnetic field strength increases, the spin polarization of the current increases. Our results show that a strongly interacting QD coupled to metallic leads, in the presence of an external magnetic field parallel to the plane of the QD, acts as a spin filter at zero temperature.

Keywords: Anderson-Holstein model, Caldeira-Leggett model, spin-polarization, quantum dots

Procedia PDF Downloads 181
16996 Simulation and Thermal Evaluation of Containers Using PCM in Different Weather Conditions of Chile: Energy Savings in Lightweight Constructions

Authors: Paula Marín, Mohammad Saffari, Alvaro de Gracia, Luisa F. Cabeza, Svetlana Ushak

Abstract:

Climate control represents an important issue when referring to the energy consumption of buildings and the associated expenses, both during installation and operation. The climate control of a building relies on several factors; among them, localization, orientation, architectural elements, and the sources of energy used are considered. In order to study the thermal behaviour of a building setup, the present study proposes the use of the energy simulation program EnergyPlus. In recent years, energy simulation programs have become important tools for the evaluation of the thermal/energy performance of buildings and facilities. Besides, the need to find new forms of passive conditioning in buildings for energy saving is a critical component. The use of phase change materials (PCMs) for heat storage applications has grown in importance due to its high efficiency. The climatic conditions of northern Chile, namely high solar radiation, extreme temperature fluctuations ranging from -10°C to 30°C (city of Calama), and a low number of cloudy days during the year, are therefore appropriate for taking advantage of solar energy and using passive systems in buildings. Also, the extensive mining activities in northern Chile encourage the use of large numbers of containers to house workers during shifts. These containers are constructed with lightweight construction systems, requiring heating during the night and cooling during the day, increasing the HVAC electricity consumption. The use of PCM can improve thermal comfort and reduce the energy consumption. The objective of this study was to evaluate the thermal and energy performance of containers of 2.5×2.5×2.5 m3 located in four cities of Chile: Antofagasta, Calama, Santiago, and Concepción. Lightweight envelopes, typically used in these building prototypes, were evaluated considering a container without PCM as the reference building and another container with PCM-enhanced envelopes as a test case, both of which have a door and a window in the same wall, oriented in two directions: north and south. To see the thermal response of these containers in different seasons, the simulations were performed over a period of one year. The results show that, for the four cities studied, higher energy savings are obtained when the door and window of the container face north, because of the higher incidence of solar radiation. The comparison of HVAC consumption and energy savings in % for the north orientation of the door and window is summarised. Simulation results show that in the city of Antofagasta 47% of the heating energy could be saved, and in the cities of Calama and Concepción the biggest savings in terms of cooling could be achieved, since the PCM reduces almost all the cooling demand. Currently, based on the simulation results, four containers have been constructed with the same structural characteristics used in the simulations, that is, containers with/without PCM, with a door and a window in one wall. Two of these containers will be placed in Antofagasta and two in a copper mine near Calama; all of them will be monitored for a period of one year. The simulation results will be validated with the experimental measurements and will be reported in the future.

Keywords: energy saving, lightweight construction, PCM, simulation

Procedia PDF Downloads 281
16995 Vulnerability Assessment of Groundwater Quality Deterioration Using PMWIN Model

Authors: A. Shakoor, M. Arshad

Abstract:

The utilization of groundwater resources for irrigation has significantly increased during the last two decades due to constrained canal water supplies. More than 70% of the farmers in the Punjab, Pakistan, depend directly or indirectly on groundwater to meet their crop water demands, and hence an unchecked paradigm shift has resulted in aquifer depletion and deterioration. Therefore, a comprehensive study was carried out in central Punjab, Pakistan, regarding the spatiotemporal variation in groundwater level and quality. Processing MODFLOW for Windows (PMWIN) and the MT3D solute transport model were used for existing conditions and for future prediction of groundwater level and quality up to 2030. A comprehensive data set of aquifer lithology, canal network, groundwater level, groundwater salinity, evapotranspiration, groundwater abstraction, recharge, etc. was used in the PMWIN model development. The model was thus successfully calibrated and validated with respect to groundwater level for the periods 2003 to 2007 and 2008 to 2012, respectively. The coefficient of determination (R2) and model efficiency (MEF) for the calibration and validation periods were calculated as 0.89 and 0.98, respectively, which indicated a high level of correlation between the calculated and measured data. For the solute transport model (MT3D), values of the advection and dispersion parameters were used. The model was run for a future scenario up to 2030, assuming no major change in climate and a gradually increasing groundwater abstraction rate. The model-predicted results revealed that the groundwater level would decline by 0.0131 to 1.68 m/year during 2013 to 2030, and the maximum decline would be on the lower side of the study area, where the canal system infrastructure is sparse. This lowering of the groundwater level might cause an increase in tubewell installation and pumping costs. Similarly, the predicted total dissolved solids (TDS) of the groundwater would increase by 6.88 to 69.88 mg/L/year during 2013 to 2030, and the maximum increase would be on the lower side. It was found that by 2030 the good-quality water would be reduced by 21.4%, while marginal- and hazardous-quality water would increase by 19.28% and 2%, respectively. The simulated results also showed that the salinity of the study area had increased due to the intrusion of salts. The deterioration of groundwater quality would cause soil salinity and ultimately a reduction in crop productivity. It was concluded from the predicted results of the groundwater model that groundwater quality deteriorated with the depth of the water table, i.e., TDS increased with declining groundwater level. It is recommended that agronomic and engineering practices, i.e., land leveling, rainwater harvesting, skimming wells, ASR (aquifer storage and recovery) wells, etc., should be integrated to improve the management of groundwater for higher crop production in salt-affected soils.
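
A minimal sketch of the two calibration metrics reported above is given below; R2 is computed as the squared Pearson correlation and MEF as the Nash-Sutcliffe efficiency, which is a common definition but an assumption here, and the observed/simulated values are placeholders.

```python
import numpy as np

def r_squared(observed, simulated):
    """Coefficient of determination: squared Pearson correlation between
    observed and simulated groundwater levels."""
    r = np.corrcoef(observed, simulated)[0, 1]
    return r ** 2

def model_efficiency(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1 - SSE / variance of the observations."""
    observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Hypothetical observed vs. simulated water-table depths (m) for illustration
obs = np.array([10.2, 10.8, 11.1, 11.9, 12.4, 13.0])
sim = np.array([10.0, 10.9, 11.3, 11.8, 12.6, 12.9])
print(f"R2 = {r_squared(obs, sim):.2f}, MEF = {model_efficiency(obs, sim):.2f}")
```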

Keywords: groundwater quality, groundwater management, PMWIN, MT3D model

Procedia PDF Downloads 376
16994 Graph-Oriented Summary for Optimized Resource Description Framework Graphs Streams Processing

Authors: Amadou Fall Dia, Maurras Ulbricht Togbe, Aliou Boly, Zakia Kazi Aoul, Elisabeth Metais

Abstract:

Existing RDF (Resource Description Framework) Stream Processing (RSP) systems allow continuous processing of RDF data issued from different application domains such as weather stations measuring phenomena, geolocation, IoT applications, drinking water distribution management, and so on. However, the processing window often expires before the entire session is finished, and RSP systems immediately delete data streams after each processed window. Such a mechanism does not allow optimized exploitation of the RDF data streams, as the most relevant and pertinent information is often not used in due time and is almost impossible to exploit for further analyses. It would be better to keep the most informative part of the data within the streams while minimizing the memory storage space. In this work, we propose an RDF graph summarization system based on explicitly and implicitly expressed needs, through three main approaches: (1) an approach for user queries (SPARQL) in order to extract their needs and group them into a more global query, (2) an extension of the closeness centrality measure from Social Network Analysis (SNA) to determine the most informative parts of the graph, and (3) an RDF graph summarization technique combining the extracted user query needs and the extended centrality measure. Experiments and evaluations show efficient results in terms of memory storage space and of the expected approximate query results on summarized graphs compared to the source ones.
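
The sketch below computes plain closeness centrality over a tiny toy graph (using networkx for illustration); the paper's extended measure and the RDF-specific structure are not reproduced here, and all node names are made up.

```python
import networkx as nx

# Tiny illustrative graph standing in for an RDF graph's node structure
# (subjects/objects as nodes, triples as edges); predicates are ignored here.
G = nx.Graph()
G.add_edges_from([
    ("sensor1", "station"), ("sensor2", "station"),
    ("station", "city"), ("city", "country"), ("sensor1", "reading42"),
])

# Standard closeness centrality from SNA; the paper extends this measure,
# but the base quantity it builds on is the one computed here.
centrality = nx.closeness_centrality(G)
most_informative = sorted(centrality, key=centrality.get, reverse=True)[:3]
print(most_informative)
```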

Keywords: centrality measures, RDF graphs summary, RDF graphs stream, SPARQL query

Procedia PDF Downloads 201
16993 Experimental and Finite Element Analysis for Mechanics of Soil-Tool Interaction

Authors: A. Armin, R. Fotouhi, W. Szyszkowski

Abstract:

In this paper, a 3-D finite element (FE) investigation of soil-blade interaction is described. The effects of the blade's shape and rake angle are examined both numerically and experimentally. The soil is considered an elastic-plastic granular material with a non-associated Drucker-Prager material model. Contact elements with different properties are used to mimic soil-blade sliding and soil-soil cutting phenomena. A separation criterion is presented, and a procedure to evaluate the forces acting on the blade is given and discussed in detail. Experimental results were derived from tests using the soil bin facility and instruments at the University of Saskatchewan. During motion of the blade, load cells collect data and send them to a computer. The forces measured by the load cells had noisy signals, which needed to be filtered. The FE results are compared with the experimental results for verification. This technique can be used in blade shape optimization and in the design of more complicated blade shapes.

Keywords: finite element analysis, experimental results, blade force, soil-blade contact modeling

Procedia PDF Downloads 318
16992 Modeling of the Friction Behavior of Carbon/Epoxy Prepreg Composite

Authors: David Aveiga, Carlos Gonzalez

Abstract:

Thermoforming of pre-impregnated composites (prepreg) is the most widely employed process for building high-performance composite structures, due to its clear advantages over alternative manufacturing techniques. This method allows easy shape moulding with a simple manufacturing system and a more refined outcome. The achievement of complex geometries can, however, be affected by undesired defects such as wrinkles. It is known that interply and ply-mould sliding behavior governs this defect generation. This work analyses interply and ply-mould friction coefficients for UD AS4/8552 carbon/epoxy prepreg. Friction coefficients are determined by a pull-out test method considering the actual velocity, pressure, and temperature conditions employed in the thermoforming process of an aeronautical composite component. A Stribeck curve is then constructed to find a mathematical expression that relates all the friction coefficients to the test variables through the Hersey number parameter. Two expressions are proposed to model the ply-ply and ply-tool friction behaviors.
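
As a hedged illustration of the Stribeck-curve construction described above, the sketch below computes the Hersey number (viscosity x velocity / pressure) for a few hypothetical test points and fits a simple power law; the power-law form and all numbers are assumptions, not the paper's proposed expressions.

```python
import numpy as np

def hersey_number(viscosity, velocity, pressure):
    """Hersey number H = eta * v / p, used as the abscissa of the Stribeck curve."""
    return viscosity * velocity / pressure

# Hypothetical pull-out test matrix (values for illustration only)
viscosity = 5e3                                          # effective resin viscosity (Pa.s)
velocities = np.array([0.001, 0.005, 0.01, 0.05])        # sliding velocity (m/s)
pressures = np.array([2e4, 2e4, 5e4, 5e4])               # normal pressure (Pa)
measured_mu = np.array([0.30, 0.42, 0.38, 0.55])         # measured friction coefficients

H = hersey_number(viscosity, velocities, pressures)
# Fit a simple power law mu = a * H^b through the measured points
b, log_a = np.polyfit(np.log(H), np.log(measured_mu), 1)
print(f"mu ~ {np.exp(log_a):.3f} * H^{b:.3f}")
```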

Keywords: friction, prepreg composite, Stribeck curve, thermoforming

Procedia PDF Downloads 182
16991 Development of a Laboratory Laser-Produced Plasma “Water Window” X-Ray Source for Radiobiology Experiments

Authors: Daniel Adjei, Mesfin Getachew Ayele, Przemyslaw Wachulak, Andrzej Bartnik, Luděk Vyšín, Henryk Fiedorowicz, Inam Ul Ahad, Lukasz Wegrzynski, Anna Wiechecka, Janusz Lekki, Wojciech M. Kwiatek

Abstract:

Laser-produced plasma light sources, emitting high-intensity pulses of X-rays and delivering high doses, are useful for understanding the mechanisms of high-dose effects on biological samples. In this study, a desk-top laser plasma soft X-ray source, developed for radiobiology research, is presented. The source is based on a double-stream gas puff target, irradiated with a commercial Nd:YAG laser (EKSPLA), which generates laser pulses of 4 ns duration and energy up to 800 mJ at a 10 Hz repetition rate. The source has been optimized for maximum emission in the "water window" wavelength range from 2.3 nm to 4.4 nm by using pure gases (argon, nitrogen, and krypton) and spectral filtering. Results of the source characterization measurements and dosimetry of the produced soft X-ray radiation are shown and discussed. The high brightness of the laser-produced plasma soft X-ray source and the low penetration depth of the produced X-ray radiation in biological specimens allow a high dose to be delivered to the specimen of over 28 Gy/shot, and 280 Gy/s at the maximum repetition rate of the laser system. The source has a unique capability for irradiation of cells with a high pulse dose both in vacuum and in a helium environment. Demonstration of the source's ability to induce DNA double- and single-strand breaks will be discussed.

Keywords: laser produced plasma, soft X-rays, radiobiology experiments, dosimetry

Procedia PDF Downloads 585
16990 Upconversion Nanomaterials for Applications in Life Sciences and Medicine

Authors: Yong Zhang

Abstract:

Light has proven to be useful in a wide range of biomedical applications such as fluorescence imaging, photoacoustic imaging, optogenetics, photodynamic therapy, photothermal therapy, and light-controlled drug/gene delivery. Taking photodynamic therapy (PDT) as an example, PDT has been proven clinically effective in early lung cancer, bladder cancer, and head and neck cancer, and it is the primary treatment for skin cancer as well. However, clinical use of PDT is severely constrained by the low penetration depth of visible light through thick tissue, limiting its use to target regions only a few millimeters deep. One way to enhance the range is to use invisible near-infrared (NIR) light within the optical window (700–1100 nm) for biological tissues, extending the depth up to 1 cm with no observable damage to the intervening tissue. We have demonstrated the use of NIR-to-visible upconversion fluorescent nanoparticles (UCNPs), which emit visible fluorescence when excited by NIR light at 980 nm, as nanotransducers for PDT to convert deep-tissue-penetrating NIR light into visible light suitable for activating photosensitizers. The unique optical properties of UCNPs enable the upconversion wavelength to be tuned and matched to the activation absorption wavelength of the photosensitizer. At depths beyond 1 cm, however, tissue remains inaccessible to light even within the NIR window, and this critical depth limitation renders existing phototherapy ineffective against most deep-seated cancers. We have demonstrated new treatment modalities for deep-seated cancers based on UCNP hydrogel implants and miniaturized, wirelessly powered optoelectronic devices for light delivery to deep tissues.

Keywords: upconversion, fluorescent, nanoparticle, bioimaging, photodynamic therapy

Procedia PDF Downloads 159