Search results for: data structure
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 30342

28512 Deep Mill Level Zone (DMLZ) of Ertsberg East Skarn System, Papua; Correlation between Structure and Mineralization to Determine the Characteristic Orebody of the DMLZ Mine

Authors: Bambang Antoro, Lasito Soebari, Geoffrey de Jong, Fernandy Meiriyanto, Michael Siahaan, Eko Wibowo, Pormando Silalahi, Ruswanto, Adi Budirumantyo

Abstract:

The Ertsberg East Skarn System (EESS) is located in the Ertsberg Mining District, Papua, Indonesia. EESS is a sub-vertical zone of copper-gold mineralization hosted in both diorite (vein-style mineralization) and skarn (disseminated and vein-style mineralization). The Deep Mill Level Zone (DMLZ) is a mining zone in the lower part of the EESS that produces copper and gold. The DMLZ deposit is located below the Deep Ore Zone deposit between the 3125 m and 2590 m elevations; it measures roughly 1,200 m in length and is between 350 and 500 m in width. Mining of the DMLZ was planned to start in Q2-2015 at an ore extraction rate of about 60,000 tpd using the block cave mining method (the block cave contains 516 Mt). Mineralization and associated hydrothermal alteration in the DMLZ are hosted and enclosed by a large stock (the Main Ertsberg Intrusion) that is barren on all sides and above the DMLZ. Late porphyry dikes that cut through the Main Ertsberg Intrusion are spatially associated with the center of the DMLZ hydrothermal system. The DMLZ orebody is hosted in diorite and skarn, both dominated by vein-style mineralization. The percentages of material to be mined at the DMLZ, compared with current reserves, are: diorite 46% (with 0.46% Cu, 0.56 ppm Au, and 0.83% EqCu); skarn 39% (with 1.4% Cu, 0.95 ppm Au, and 2.05% EqCu); hornfels 8% (with 0.84% Cu, 0.82 ppm Au, and 1.39% EqCu); and marble 7%, possibly mined as waste. The correlation between the Ertsberg intrusion, major structures, and vein-style mineralization is important for determining the characteristics of the orebody in the DMLZ mine. In general, the DMLZ has two types of vein-filling mineralization in its two hosts (diorite and skarn): in the diorite host the vein system is filled by chalcopyrite-bornite-quartz and pyrite, while in the skarn host the veins are filled by chalcopyrite-bornite-pyrite and magnetite without quartz. Based on orientation, the stockwork veins in the diorite host and the shallow veins in the skarn host generally trend NW-SE and NE-SW with shallow to moderate dips. The DMLZ is controlled by two main major faults; geologists have identified and verified local structures between the major structures, trending NW-SE and NE-SW, with characteristic slickensides, shearing, gouge, water-gas channels, and surfaces that have in some cases been re-healed.

Keywords: copper-gold, DMLZ, skarn, structure

Procedia PDF Downloads 489
28511 Design Application Procedures of 15 Storied 3D Reinforced Concrete Shear Wall-Frame Structure

Authors: H. Nikzad, S. Yoshitomi

Abstract:

This paper presents the design application and reinforcement detailing of a 15-storied reinforced concrete shear wall-frame structure based on linear static analysis. Databases of section sizes are generated using an automated structural optimization method utilizing the active-set algorithm on the MATLAB platform. The design constraints of allowable section sizes, capacity criteria, and seismic provisions for static loads and combinations of gravity and lateral loads are checked and determined based on the ASCE 7-10 documents and the ACI 318-14 design provisions. The results of this study illustrate the efficiency of the proposed method and are expected to provide a useful reference for the design of RC shear wall-frame structures.

Keywords: design constraints, ETABS, linear static analysis, MATLAB, RC shear wall-frame structures, structural optimization

Procedia PDF Downloads 245
28510 Planktivorous Fish Schooling Responses to Current at Natural and Artificial Reefs

Authors: Matthew Holland, Jason Everett, Martin Cox, Iain Suthers

Abstract:

High spatial-resolution distribution of planktivorous reef fish can reveal behavioural adaptations to optimise the balance between feeding success and predator avoidance. We used a multi-beam echosounder to record bathymetry and the three-dimensional distribution of fish schools associated with natural and artificial reefs. We utilised generalised linear models to assess the distribution, orientation, and aggregation of fish schools relative to the structure, vertical relief, and currents. At artificial reefs, fish schooled more closely to the structure and demonstrated a preference for the windward side, particularly when exposed to strong currents. Similarly, at natural reefs fish demonstrated a preference for windward aspects of bathymetry, particularly when associated with high vertical relief. Our findings suggest that under conditions with stronger current velocity, fish can exercise their preference to remain close to structure for predator avoidance, while still receiving an adequate supply of zooplankton delivered by the current. Similarly, when current velocity is low, fish tend to disperse for better access to zooplankton. As artificial reefs are generally deployed with the goal of creating productivity rather than simply attracting fish from elsewhere, we advise that future artificial reefs be designed as semi-linear arrays perpendicular to the prevailing current, with multiple tall towers. This will facilitate the conversion of dispersed zooplankton into energy for higher trophic levels, enhancing reef productivity and fisheries.

Keywords: artificial reef, current, forage fish, multi-beam, planktivorous fish, reef fish, schooling

Procedia PDF Downloads 141
28509 Modeling of Complex Structures: Shear Wall with Openings and Stiffened Shells

Authors: Temami Oussama, Bessais Lakhdar, Hamadi Djamal, Abderrahmani Sifeddine

Abstract:

The analysis of complex structures often leads the engineer to make simplifying assumptions; sometimes, however, the whole structure can be analysed in all its complexity, and this can be done using the finite element method (FEM). In the modeling of complex structures by finite elements, various elements can be used: beam elements, membrane elements, solid elements, and plate and shell elements. These elements are formulated according to the classical formulation and do not generally share the same nodal degrees of freedom, which complicates the development of a compatible model. The compatibility of the elements with each other is often a difficult problem when modeling complicated structures, and this compatibility is necessary to ensure convergence. To overcome this problem, we have proposed finite elements with a rotational degree of freedom. The study is based on the strain approach, with 2D and 3D formulations having different degrees of freedom at each node. For the comparison and validation of results, the finite elements available in ABAQUS/Standard are used.

Keywords: compatibility requirement, complex structures, finite elements, modeling, strain approach

Procedia PDF Downloads 427
28508 Gamification of eHealth Business Cases to Enhance Rich Learning Experience

Authors: Kari Björn

Abstract:

The introduction of games has expanded the application area of computer-aided learning tools to a wide variety of learner age groups. Serious games engage learners in a real-world type of simulation and can potentially enrich the learning experience. The institutional background of a Bachelor’s-level engineering program in Information and Communication Technology is introduced, with detailed focus on one of its majors, Health Technology. As part of a Customer Oriented Software Application thematic semester, one particular course, “eHealth Business and Solutions”, is described and reflected on in a gamified framework. Building a consistent view of the vast literature on business management, strategy, marketing, and finance in a very limited time forces the selection of topics relevant to the industry. Health Technology is a novel and growing industry with an expanding sector in consumer wearable devices and homecare applications. The business sector is attracting new entrepreneurs and impatient investor funds. From an engineering education point of view, the sector is driven by miniaturized electronics, sensors, and wireless applications. However, the market is highly consumer-driven, and usability, safety, and data integrity requirements are extremely high. When the same technology is used in the analysis or treatment of patients, very strict regulatory measures are enforced. The paper introduces a course structure that uses gamification as a tool to learn the essentials of a new market: customer value proposition design, followed by a market entry game. Students analyze the existing market size and pricing structure of the eHealth web-service market and enter the market as the steering group of their company, competing against the legacy players and against each other. The market is growing but follows its own rules of demand and supply balance. New products can be developed with an R&D investment and targeted to the market with unique quality and price combinations. The product cost structure can be improved by investing in enhanced production capacity, and investments can optionally be funded by foreign capital. Students make management decisions and face the dynamics of market competition in the form of an income statement and balance sheet after each decision cycle. The focus of the learning outcome is to understand that customer value creation is the source of cash flow. The benefit of gamification is to enrich the learning experience regarding the structure and meaning of financial statements. The paper describes the gamification approach and discusses outcomes after two course implementations. Along with the description of learning challenges, some unexpected misconceptions are noted, and improvements to the game and to the semi-gamified teaching pedagogy are discussed. The case description serves as additional support to a new game coordinator, as well as helping to improve the method. Overall, the gamified approach has helped to engage engineering students in business studies in an energizing way.

Keywords: engineering education, integrated curriculum, learning experience, learning outcomes

Procedia PDF Downloads 227
28507 Wettability Properties of Pineapple Leaf Fibers and Banana Pseudostem Fibers Treated by Cold Plasma

Authors: Tatiana Franco, Hugo A. Estupinan

Abstract:

Banana pseudostem fiber (BPF) and pineapple leaf fiber (PLF) arouse interest in different areas of research for their excellent mechanical properties and biodegradability. In tropical regions, where the banana pseudostem and the pineapple leaf become hard-to-handle solid waste, they can serve as a low-cost and environmentally sustainable raw material in research on composite materials. In terms of the functionality of this type of fiber, an open structure would allow the adsorption and retention of organic, inorganic, and metallic species. In general, natural fibers in their natural state have closed surface structures with intricate internal arrangements that could be exploited for solving environmental problems and for other technological uses; however, it is not possible to access their internal structure and sublayers when the fibers are left untreated. An alternative to chemical and enzymatic treatments is plasma treatment, which is known to be clean, economical, and controllable. In this type of treatment, a gas contained in a reactor in the form of plasma acts on the fiber, generating changes in its structure, morphology, and topography. This work compares the effects of cold argon plasma treatment, with varying time and current, on PLF and BPF grown in the Antioquia region of Colombia. The morphological, compositional, and wettability properties of the fibers were analyzed by Raman microscopy, contact angle measurements, scanning electron microscopy (SEM), and atomic force microscopy (AFM). The cold plasma treatment of PLF and BPF increased their wettability and modified their topography and the microstructural relationship between lignin and cellulose.

Keywords: cold plasma, contact angle, natural fibers, Raman, SEM, wettability

Procedia PDF Downloads 140
28506 PAPR Reduction of FBMC Using Sliding Window Tone Reservation Active Constellation Extension Technique

Authors: S. Anuradha, V. Sandeep Kumar

Abstract:

The high peak-to-average power ratio (PAPR) in filter bank multicarrier with offset quadrature amplitude modulation (FBMC-OQAM) can significantly reduce power efficiency and performance. In this paper, we address the problem of PAPR reduction for FBMC-OQAM systems using the tone reservation (TR) technique. Due to the overlapping structure of FBMC-OQAM signals, directly applying the TR schemes of OFDM systems to FBMC-OQAM systems is not effective. We improve the TR technique by employing a sliding window together with active constellation extension for the PAPR reduction of FBMC-OQAM signals, called the sliding window tone reservation active constellation extension (SW-TRACE) technique. The proposed SW-TRACE technique uses the peak reduction tones (PRTs) of several consecutive data blocks to cancel the peaks of the FBMC-OQAM signal inside a window, while dynamically extending the outer constellation points of active (data-carrying) channels within margin-preserving constraints, in order to minimize the peak magnitude. Analysis and simulation results are compared with the existing TR technique for the FBMC-OQAM system. The proposed SW-TRACE method has better PAPR performance and lower computational complexity.
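
To make the quantity being minimized concrete, the short NumPy sketch below synthesizes a random multicarrier block with an oversampled IFFT and computes its PAPR; the QPSK mapping, subcarrier count, and oversampling factor are illustrative assumptions and do not reproduce the authors' FBMC-OQAM transmitter or the SW-TRACE optimization itself.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(0)
n_subcarriers, oversampling = 256, 4

# Illustrative multicarrier block: random QPSK symbols on each subcarrier,
# synthesized with an oversampled IFFT (a stand-in for the FBMC-OQAM filter bank).
symbols = (rng.choice([-1, 1], n_subcarriers) + 1j * rng.choice([-1, 1], n_subcarriers)) / np.sqrt(2)
spectrum = np.zeros(n_subcarriers * oversampling, dtype=complex)
spectrum[:n_subcarriers] = symbols
signal = np.fft.ifft(spectrum) * np.sqrt(len(spectrum))

print(f"PAPR before any reduction: {papr_db(signal):.2f} dB")
```

A TR-based scheme would add a cancellation signal carried only on the reserved tones (and, for SW-TRACE, extend outer constellation points) so that the peak of `signal` inside the sliding window is reduced without distorting the data-carrying subcarriers.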

Keywords: FBMC-OQAM, peak-to-average power ratio, sliding window, tone reservation Active Constellation Extension

Procedia PDF Downloads 427
28505 Structural, Electronic and Optical Properties of LiₓNa₁₋ₓH for Hydrogen Storage

Authors: B. Bahloul

Abstract:

This study investigates the structural, electronic, and optical properties of LiH and NaH compounds, as well as their ternary mixed crystals LiₓNa₁₋ₓH, adopting a face-centered cubic structure with space group Fm-3m (number 225). The structural and electronic characteristics are examined using density functional theory (DFT), while empirical methods, specifically the modified Moss relation, are employed for analyzing the optical properties. The exchange-correlation potential is determined through the generalized gradient approximation (PBEsol-GGA) within the DFT framework, utilizing the projector augmented-wave (PAW) pseudopotential approach. The Quantum Espresso code is employed for conducting these calculations. The calculated lattice parameters at equilibrium volume and the bulk modulus for x=0 and x=1 exhibit good agreement with existing literature data. Additionally, the LiₓNa₁₋ₓH alloys are identified as having a direct band gap.

Keywords: DFT, structural, electronic, optical properties

Procedia PDF Downloads 48
28504 Authorization of Commercial Communication Satellite Grounds for Promoting Turkish Data Relay System

Authors: Celal Dudak, Aslı Utku, Burak Yağlioğlu

Abstract:

Uninterrupted and continuous satellite communication throughout the whole orbit is becoming more indispensable every day. Data relay systems are developed and built for various high/low data rate information exchanges, such as the TDRSS of the USA and the EDRS of Europe. In these missions, a couple of task-dedicated communication satellites exist. In this regard, a data relay system for Turkey is tentatively defined for exchanging low data rate information (i.e., TT&C) with Earth-observing LEO satellites by appointing commercial GEO communication satellites all over the world. First, a justification of this attempt is given, demonstrating the duration enhancements in the link, and the preference for RF communication over laser communication is discussed. Then, the preferred communication GEOs – including TURKSAT4A, which already belongs to Turkey – are given, together with the coverage enhancements obtained through STK simulations and the corresponding link budget. A block diagram of the communication system on the LEO satellite is also given.

Keywords: communication, GEO satellite, data relay system, coverage

Procedia PDF Downloads 421
28503 The Development of Encrypted Near Field Communication Data Exchange Format Transmission in an NFC Passive Tag for Checking the Genuine Product

Authors: Tanawat Hongthai, Dusit Thanapatay

Abstract:

This paper presents the development of encrypted near field communication (NFC) data exchange format transmission in an NFC passive tag, to assess the feasibility of implementing genuine product authentication. We organize the research on encryption and genuine-product checking into four major categories: concept, infrastructure, development, and applications. The results show that a passive NFC Forum Type 2 tag can be configured to be compatible with the NFC Data Exchange Format (NDEF) and can be automatically and partially updated with data when an NFC field is present.
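
As a minimal sketch of the encrypted-payload idea, the following Python example authenticates and encrypts a hypothetical product identifier with AES-GCM (via the cryptography package) before it would be written into an NDEF record; the cipher choice, the key handling, and the product_id value are assumptions, since the paper does not specify its encryption scheme or tag-writing stack.

```python
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

# Hypothetical product identifier to be stored in the NDEF payload.
product_id = b"SN-2016-000123"

key = AESGCM.generate_key(bit_length=128)   # shared between tag writer and verifier
nonce = os.urandom(12)                      # 96-bit nonce, unique per tag write
aesgcm = AESGCM(key)

# Encrypt-and-authenticate the identifier; the ciphertext (plus nonce) would be
# written as the payload of an NDEF record on the passive Type 2 tag.
ciphertext = aesgcm.encrypt(nonce, product_id, associated_data=b"NDEF")

# Verifier side: any tampering with the payload makes decryption fail.
assert aesgcm.decrypt(nonce, ciphertext, b"NDEF") == product_id
```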

Keywords: near field communication, NFC data exchange format, checking the genuine product, encrypted NFC

Procedia PDF Downloads 264
28502 Time-Dependent Analysis of Composite Steel-Concrete Beams Subjected to Shrinkage

Authors: Rahal Nacer, Beghdad Houda, Tehami Mohamed, Souici Abdelaziz

Abstract:

The shrinkage of concrete causes undesirable parasitic effects in a structure and can harm both its resistance and its appearance. Modelling the long-term behaviour of steel-concrete composite beams requires the use of the time variable and consideration of the entire sustained stress history of the concrete slab constituting the cross section. The work introduced in this article is a theoretical study of the behaviour of composite beams with respect to the phenomenon of concrete shrinkage. Using the theory of linear viscoelasticity of concrete and the rate of creep method, we propose an analytical model, made up of a system of two linear differential equations, emphasizing the effects caused by shrinkage on the resistance of steel-concrete composite beams. Results obtained from the application of the suggested model to a steel-concrete composite beam are satisfactory.
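
For readers unfamiliar with the rate of creep method, the constitutive assumption on which such models are commonly built can be written as below; the notation (creep coefficient φ, shrinkage strain ε_sh) is ours, and the authors' actual pair of differential equations for the composite section is not reproduced here.

```latex
% Rate-of-creep (Dischinger-type) assumption for the concrete slab: the total
% concrete strain rate is the sum of instantaneous elastic, creep and shrinkage parts.
\[
\frac{d\varepsilon_c(t)}{dt}
  = \frac{1}{E_c}\,\frac{d\sigma_c(t)}{dt}
  + \frac{\sigma_c(t)}{E_c}\,\frac{d\varphi(t)}{dt}
  + \frac{d\varepsilon_{sh}(t)}{dt}
\]
```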

Keywords: composite beams, shrinkage, time, rate of creep method, viscoelasticity theory

Procedia PDF Downloads 508
28501 Earthquake Resistant Sustainable Steel Green Building

Authors: Arup Saha Chaudhuri

Abstract:

Structural steel is a very ductile material with a high strength-carrying capacity, which makes it very useful for earthquake-resistant buildings; it is also a homogeneous material. The member sections and the structural system can be made very efficient for an economical design. As steel is recyclable and reusable, it is a green material, and the embodied energy of an efficiently designed steel structure is lower than that of an RC structure. For sustainable green buildings, steel is the best material nowadays. Moreover, pre-engineered and prefabricated construction methodologies are faster and help development work finish within the stipulated time. In this paper, the usefulness of the eccentric bracing frame (EBF) in steel structures over the moment resisting frame (MRF) and the concentric bracing frame (CBF) is shown. Stability of steel structures against horizontal forces, especially under seismic conditions, can be achieved efficiently by eccentric bracing systems with economical connection details. The EBF is pin-ended, but the beam-column joints can be designed either as pin-ended or for full connectivity. The EBF has several desirable features for seismic resistance: in comparison with the CBF system, the EBF system can be designed for appropriate stiffness and drift control, and the link beam is supposed to yield in shear or flexure before the initiation of yielding or buckling of the bracing member in tension or compression. The behavior of a 2-D steel frame is observed under seismic loading conditions in the present paper. The ductility and brittleness of the frames are compared with respect to their period of vibration and dynamic base shear. It is observed that the EBF system performs better than the MRF system when comparing the period of vibration and the base shear participation.

Keywords: steel building, green and sustainable, earthquake resistant, EBF system

Procedia PDF Downloads 337
28500 Data Hiding by Vector Quantization in Color Image

Authors: Yung Gi Wu

Abstract:

With the growth of computers and networks, digital data can be spread anywhere in the world quickly. In addition, digital data can easily be copied or tampered with, so security has become an important topic in the protection of digital data. Digital watermarking is a method to protect the ownership of digital data, although embedding the watermark inevitably affects quality. In this paper, vector quantization (VQ) is used to embed the watermark into the image to achieve data hiding. This kind of watermarking is invisible, meaning that users will not be conscious of the existence of the embedded watermark even though the embedded image differs slightly from the original image. Since VQ carries a heavy computational burden, we adopt a fast VQ encoding scheme based on partial distortion search (PDS) and a mean approximation scheme to speed up the data hiding process. The watermarks hidden in the image can be gray-level, bi-level, or color images, and text can also be embedded as a watermark. In order to test the robustness of the system, we use Photoshop to apply sharpening, cropping, and other alterations and check whether the extracted watermark is still recognizable. Experimental results demonstrate that the proposed system can resist the above three kinds of tampering in general cases.
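
The partial distortion search idea mentioned above can be sketched in a few lines of Python: the accumulated squared error of a candidate codeword is compared against the best distortion found so far, and the candidate is abandoned as soon as it can no longer win. The codebook size and block dimension below are illustrative, and the mean approximation pre-screening step is omitted.

```python
import numpy as np

def pds_nearest_codeword(block, codebook):
    """Partial distortion search: reject a codeword as soon as its partial
    squared error exceeds the best distortion found so far."""
    best_index, best_dist = 0, np.inf
    for i, codeword in enumerate(codebook):
        dist = 0.0
        for b, c in zip(block, codeword):
            dist += (b - c) ** 2
            if dist >= best_dist:       # early rejection -> fewer operations
                break
        else:                           # loop finished: this codeword is the new best
            best_index, best_dist = i, dist
    return best_index

rng = np.random.default_rng(1)
codebook = rng.uniform(0, 255, size=(256, 16))   # 256 codewords for 4x4 image blocks
block = rng.uniform(0, 255, size=16)
print("nearest codeword index:", pds_nearest_codeword(block, codebook))
```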

Keywords: data hiding, vector quantization, watermark, color image

Procedia PDF Downloads 346
28499 Construction and Application of Zr-MCM41 Nanoreactors as Highly Active and Efficient Catalyst in the Synthesis of Biginelli-Type Compounds

Authors: Zohreh Derikvand

Abstract:

Zr-MCM-41 nanoreactors were prepared via the reaction of ZrOCl2, fumed silica, sodium hydroxide, and cetyltrimethylammonium bromide under hydrothermal conditions. The prepared nanoreactors were characterized by FT-IR spectroscopy, X-ray diffraction (XRD), scanning electron microscopy (SEM), and nitrogen adsorption-desorption. The XRD pattern of Zr-MCM-41 exhibits a high-intensity (100) reflection and two low-intensity reflections (110 and 200), which are characteristic of a hexagonal structure, exhibiting the long-range order and good textural uniformity of the mesoporous structure. Based on a green chemistry approach, we report an efficient and environmentally benign protocol to study the catalytic activity of Zr-MCM-41 in Biginelli-type reactions. The Zr-MCM-41 nanoreactors were used as a highly recoverable and reusable catalyst for the synthesis of 3,4-dihydropyrimidin-2(1H)-ones, octahydroquinazolinones, benzimidazolo-quinazolinones, and 4,6-diarylpyrimidin-2(1H)-ones. The methodology offers several advantages, such as short reaction times, high yields, and simple operation. The catalyst remained active for up to three cycles.

Keywords: Zr-MCM-41 nanoreactors, Biginelli-like reactions, 3,4-dihydropyrimidin-2(1H)-ones, octahydroquinazolinones

Procedia PDF Downloads 192
28498 Extraction of Forest Plantation Resources in Selected Forest of San Manuel, Pangasinan, Philippines Using LiDAR Data for Forest Status Assessment

Authors: Mark Joseph Quinto, Roan Beronilla, Guiller Damian, Eliza Camaso, Ronaldo Alberto

Abstract:

Forest inventories are essential to assess the composition, structure, and distribution of forest vegetation, which can be used as baseline information for management decisions. Classical forest inventory is labor-intensive, time-consuming, and sometimes even dangerous. The use of Light Detection and Ranging (LiDAR) in forest inventory can improve on and overcome these restrictions. This study was conducted to determine the possibility of using LiDAR-derived data to extract high-accuracy forest biophysical parameters and to serve as a non-destructive method for forest status analysis of San Manuel, Pangasinan. Forest resources extraction was carried out using LAS tools, GIS, ENVI, and .bat scripts with the available LiDAR data. The process includes the generation of derivatives such as the Digital Terrain Model (DTM), Canopy Height Model (CHM), and Canopy Cover Model (CCM) in .bat scripts, followed by the generation of 17 composite bands to be used in the extraction of forest classification covers using ENVI 4.8 and GIS software. The Diameter at Breast Height (DBH), Above Ground Biomass (AGB), and Carbon Stock (CS) were estimated for each classified forest cover, and tree count extraction was carried out using GIS. Subsequently, field validation was conducted for accuracy assessment. Results showed that the forest of San Manuel has 73% forest cover, which is much higher than the 10% canopy cover requirement. Of the extracted canopy heights, 80% of the trees range from 12 m to 17 m. The CS of the three forest covers, based on the AGB, was 20819.59 kg/20x20 m for closed broadleaf, 8609.82 kg/20x20 m for broadleaf plantation, and 15545.57 kg/20x20 m for open broadleaf. The average tree count for the forest plantation was 413 trees/ha. As such, the forest of San Manuel has a high percentage of forest cover and high CS.
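
A minimal NumPy sketch of two of the derivations described above is given below: the canopy height model as the difference between a surface model and a terrain model, and a carbon stock estimate from above-ground biomass. The rasters and the AGB value are synthetic stand-ins, and the 0.47 carbon fraction is a commonly used default rather than a factor stated by the authors.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins for the LiDAR rasters (metres), one 20 m x 20 m plot at 1 m resolution.
dtm = rng.uniform(95, 105, size=(20, 20))      # Digital Terrain Model (bare earth)
dsm = dtm + rng.uniform(0, 20, size=(20, 20))  # first-return surface model

chm = dsm - dtm                                # Canopy Height Model
canopy_cover = np.mean(chm > 5.0)              # fraction of cells taller than 5 m

# Carbon stock from above-ground biomass; 0.47 is a commonly used default carbon
# fraction, and the AGB value here is hypothetical.
agb_kg_per_plot = 44_000.0
carbon_stock_kg = 0.47 * agb_kg_per_plot

print(f"mean canopy height: {chm.mean():.1f} m, canopy cover: {canopy_cover:.0%}")
print(f"estimated carbon stock: {carbon_stock_kg:.0f} kg per 20x20 m plot")
```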

Keywords: carbon stock, forest inventory, LiDAR, tree count

Procedia PDF Downloads 364
28497 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model

Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin

Abstract:

Early detection of anomalies in data centers is important to reduce downtimes and the costs of periodic maintenance. However, there is little research on this topic and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers by combining sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. The performance of the model is assessed a posteriori through the F1-score by comparing detected anomalies with the data center’s history. The proposed model outperforms the state-of-the-art reconstruction method, which uses only one autoencoder taking multivariate sequences and detects an anomaly with a threshold on the reconstruction error, with an F1-score of 83.60% compared to 24.16%.
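
A compressed sketch of the reconstruction-plus-classification pipeline is shown below, assuming tensorflow.keras and scikit-learn: one LSTM autoencoder per sensor is trained on normal windows, and a random forest classifies simple statistics of the reconstruction error. The layer sizes, window length, residual features, and synthetic data are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier

window, n_features = 60, 1   # one autoencoder per sensor (e.g. temperature)

def build_lstm_autoencoder():
    """Sequence-to-sequence LSTM autoencoder reconstructing one sensor's window."""
    inputs = tf.keras.Input(shape=(window, n_features))
    encoded = tf.keras.layers.LSTM(32)(inputs)
    decoded = tf.keras.layers.RepeatVector(window)(encoded)
    decoded = tf.keras.layers.LSTM(32, return_sequences=True)(decoded)
    outputs = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(n_features))(decoded)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

# Train on normal samples only, then derive features from the residual signal.
rng = np.random.default_rng(3)
x_normal = rng.normal(size=(512, window, n_features)).astype("float32")
ae = build_lstm_autoencoder()
ae.fit(x_normal, x_normal, epochs=2, batch_size=32, verbose=0)

def residual_features(x):
    r = np.abs(x - ae.predict(x, verbose=0))        # reconstruction error signal
    return np.stack([r.mean(axis=(1, 2)), r.max(axis=(1, 2))], axis=1)

# Illustrative labels: a random forest separates normal from anomalous residuals.
x_labeled = np.concatenate([x_normal[:64], x_normal[:64] + 3.0])
y_labeled = np.array([0] * 64 + [1] * 64)
clf = RandomForestClassifier(n_estimators=100).fit(residual_features(x_labeled), y_labeled)
```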

Keywords: anomaly detection, autoencoder, data centers, deep learning

Procedia PDF Downloads 175
28496 The Way of Life of the Civil Servant Community under the Bureau of the Royal Household: A Case Study of Tha Wasukri, Bangkok

Authors: Vilasinee Jintalikhitdee, Saowapa Phaithayawat

Abstract:

The research on “The Way of Life of the Civil Servant Community under the Bureau of the Royal Household” aims to study 1) the way of life of the people who live in the civil servant community in Tha Wasukri, and 2) the model of community administration of civil servants under the Bureau of the Royal Household. This research is conducted qualitatively and quantitatively by collecting data from interviews, focus group discussions, and participant and non-participant observation, along with data from a questionnaire administered to three age groups: elders, the working-age group, and youth. The results show that the origin of this community is tied to history during the reign of Rama V. It has served as the harbor from which the king travels by boat for royal ceremonies, a custom still maintained today. The status or position of a person who serves the king is often inherited through the Bureau of the Royal Household on the basis of consanguinity, which in turn confers the right to live in the Tha Wasukri area. Therefore, this community has some special characteristics, demonstrating a way of living influenced by the regulations of the Bureau of the Royal Household, such as respect for elders and interdependence, with an internal social organization that applies bureaucratic practice to entering and leaving the community. A person who has the right to live here must be friendly to everybody so that the community remains a safe place for lives and property. The Bangkok model of local administration is used as an external structure only, while the way of living still follows the practice of the Bureau of the Royal Household.

Keywords: way of life, community, Tha Wasukri, Bureau of the Royal Household

Procedia PDF Downloads 449
28495 Fractal Nature of Granular Mixtures of Different Concretes Formulated with Different Methods of Formulation

Authors: Fatima Achouri, Kaddour Chouicha, Abdelwahab Khatir

Abstract:

It is clear that quality concrete must be made with selected materials chosen in optimum proportions so that, after placement, a minimum of voids remains in the material produced. The different formulation methods in use are mostly based on a granular curve that describes an ‘optimal granularity’. Many authors have engaged in fundamental research on granular arrangements. Comparisons of mathematical models reproducing these granular arrangements with experimental measurements of compactness have verified that the minimum porosity P over a given granular extent follows a power law. Thus the best compactness in a finite medium is obtained with power laws, such as those of Furnas, Fuller, or Talbot, each preferring a particular exponent between 0.20 and 0.50. These considerations converge on the assumption that the optimal granularity of Caquot is approximated by a power law. By analogy, it can then be analyzed as a fractal-type granular structure, since the internal-similarity properties that characterize fractal objects are also expressed by a power law. Optimized mixtures may be described as a series of granular classes filling the container according to a regular hierarchical distribution, which would give, at different scales and by cascading effects, the same structure to the mix. This model is likely appropriate for the entire extent of the size distribution of the components, from correctly deflocculated cement particles (and silica fume) of micrometric dimensions up to chippings of sometimes several tens of millimeters. As part of this research, the aim is to illustrate the application of fractal analysis to characterize optimized granular concrete mixtures through a so-called fractal dimension; different concretes were studied, and we show that their granular mixtures have a fractal structure regardless of the formulation method or the type of concrete.
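
The power-law grading curves cited above (Fuller/Talbot type) and their link to a mass-based fractal dimension can be illustrated with the short sketch below; the relation D_f = 3 − q used here is the one commonly assumed for cumulative mass distributions, and the exponent values are only examples from the 0.20–0.50 range mentioned in the text.

```python
import numpy as np

d_max = 20.0                                 # largest particle size (mm), illustrative
d = np.logspace(-3, np.log10(d_max), 50)     # sizes from 1 µm up to d_max

def passing_fraction(d, d_max, q):
    """Fuller/Talbot-type grading curve: cumulative fraction passing sieve size d."""
    return (d / d_max) ** q

for q in (0.20, 0.35, 0.50):                 # grading exponents in the range cited above
    fractal_dimension = 3.0 - q              # mass-based fractal dimension implied by the power law
    p50 = passing_fraction(d_max / 2, d_max, q)
    print(f"q = {q:.2f}  ->  D_f = {fractal_dimension:.2f}, fraction passing d_max/2 = {p50:.2f}")
```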

Keywords: concrete formulation, fractal character, granular packing, method of formulation

Procedia PDF Downloads 238
28494 Submicron Size of Alumina/Titania Tubes for CO2-CH4 Conversion

Authors: Chien-Wan Hun, Shao-Fu Chang, Jheng-En Yang, Chien-Chon Chen, Wern-Dare Jheng

Abstract:

This research provides a systematic way to study and better understand the double nano-tubular structure of alumina (Al2O3) and titania (TiO2). The TiO2 NT was prepared by immersing the Al2O3 template in a 0.02 M titanium fluoride (TiF4) solution (pH=3) at 25 °C for 120 min, followed by annealing at 450 °C for 1 h to obtain anatase TiO2 NT in the Al2O3 template. Large-scale development of films for nanotube-based CO2 capture and conversion can potentially result in more efficient energy harvesting. In addition, the production process will be relatively environmentally friendly. The knowledge generated by this research will significantly advance research in the area of Al2O3, TiO2, CaO, and Ca2O3 nano-structure film fabrication and applications for CO2 capture and conversion. This green energy source will potentially reduce reliance on carbon-based energy resources and increase interest in science and engineering careers.

Keywords: alumina, titania, nano-tubular, film, CO2

Procedia PDF Downloads 382
28493 Covariance of the Queue Process Fed by Isonormal Gaussian Input Process

Authors: Samaneh Rahimirshnani, Hossein Jafari

Abstract:

In this paper, we consider fluid queueing processes fed by an isonormal Gaussian process. We study the correlation structure of the queueing process and the rate of convergence of the running supremum in the queueing process. The Malliavin calculus techniques are applied to obtain relations that show the workload process inherits the dependence properties of the input process. As examples, we consider two isonormal Gaussian processes, the sub-fractional Brownian motion (SFBM) and the fractional Brownian motion (FBM). For these examples, we obtain upper bounds for the covariance function of the queueing process and its rate of convergence to zero. We also discover that the rate of convergence of the queueing process is related to the structure of the covariance function of the input process.
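
For concreteness, the queueing (workload) process studied in this setting is commonly defined through Reich's formula, written below in our notation; the constant service rate c and the centring of the input are assumptions of this sketch, not details taken from the paper.

```latex
% Workload (queue content) of a fluid queue with constant service rate c > 0 fed by
% a centred Gaussian input process X = {X(t)}, e.g. FBM or SFBM:
\[
Q(t) = \sup_{0 \le s \le t}\bigl(X(t) - X(s) - c\,(t-s)\bigr),
\qquad
Q = \sup_{s \ge 0}\bigl(X(s) - c\,s\bigr) \quad \text{(stationary workload)}.
\]
```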

Keywords: queue length process, Malliavin calculus, covariance function, fractional Brownian motion, sub-fractional Brownian motion

Procedia PDF Downloads 45
28492 Hybrid Precoder Design Based on Iterative Hard Thresholding Algorithm for Millimeter Wave Multiple-Input-Multiple-Output Systems

Authors: Ameni Mejri, Moufida Hajjaj, Salem Hasnaoui, Ridha Bouallegue

Abstract:

Recent technological advances have made millimeter wave (mmWave) communication possible. Due to the huge amount of spectrum available in mmWave frequency bands, this promising candidate is considered a key technology for the deployment of 5G cellular networks. In order to enhance system capacity and achieve spectral efficiency, very large antenna arrays are employed in mmWave systems to exploit array gain. However, it has been shown that conventional beamforming strategies are not suitable for mmWave hardware implementation. Therefore, new features are required for mmWave cellular applications. Unlike traditional multiple-input-multiple-output (MIMO) systems, for which digital precoders alone are sufficient to accomplish precoding, MIMO technology is different at mmWave because of digital precoding limitations. Digital precoding requires one radio frequency (RF) chain per antenna, together with the corresponding signal mixers and analog-to-digital converters; as RF chains are costly and power-hungry, another alternative is needed. Although the hybrid precoding architecture, which combines a baseband precoder with an RF precoder, has been regarded as the best solution, the optimal design of hybrid precoders remains open. According to the mapping strategies from RF chains to the antenna elements, there are two main categories of hybrid precoding architecture. As a sub-array hybrid precoding architecture, the partially-connected structure reduces hardware complexity by using fewer phase shifters, whereas it sacrifices some beamforming gain. In this paper, we treat the hybrid precoder design in mmWave MIMO systems as a matrix factorization problem. Thus, we adopt the alternating minimization principle in order to solve the design problem. Further, we present our proposed algorithm for the partially-connected structure, which is based on the iterative hard thresholding method. Through simulation results, we show that our hybrid precoding algorithm provides significant performance gains over existing algorithms. We also show that the proposed approach significantly reduces the computational complexity. Furthermore, valuable design insights are provided when we use the proposed algorithm to make simulation comparisons between the hybrid precoding partially-connected structure and the fully-connected structure.
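
To illustrate the factorization problem being solved, the NumPy sketch below alternates between a least-squares baseband update and a closed-form unit-modulus phase projection for a partially-connected analog precoder. This is a plain alternating-minimization stand-in, not the authors' iterative hard thresholding algorithm, and the antenna, RF-chain, and stream counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n_tx, n_rf, n_streams = 16, 4, 2          # antennas, RF chains, data streams (illustrative)

# Target fully digital precoder (stand-in for the optimal unconstrained precoder).
f_opt = rng.normal(size=(n_tx, n_streams)) + 1j * rng.normal(size=(n_tx, n_streams))

# Partially-connected structure: each RF chain drives a disjoint sub-array of antennas.
mask = np.zeros((n_tx, n_rf))
for r in range(n_rf):
    mask[r * (n_tx // n_rf):(r + 1) * (n_tx // n_rf), r] = 1

f_rf = mask * np.exp(1j * rng.uniform(0, 2 * np.pi, size=(n_tx, n_rf)))

for _ in range(50):
    # Baseband precoder: least-squares fit for the current analog precoder.
    f_bb, *_ = np.linalg.lstsq(f_rf, f_opt, rcond=None)
    # Analog precoder: project onto the phase-shifter constraint (unit modulus on
    # the sub-array support, zero elsewhere); this phase choice is optimal per entry.
    f_rf = mask * np.exp(1j * np.angle(f_opt @ f_bb.conj().T))

residual = np.linalg.norm(f_opt - f_rf @ f_bb) / np.linalg.norm(f_opt)
print(f"relative factorization error: {residual:.3f}")
```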

Keywords: alternating minimization, hybrid precoding, iterative hard thresholding, low-complexity, millimeter wave communication, partially-connected structure

Procedia PDF Downloads 304
28491 The Effect of an Infill on the Bearing Capacity and Stiffness of Infilled Frames

Authors: Goran Baloevic, Jure Radnic, Nikola Grgic

Abstract:

The application of frames with masonry or panel infill is common in engineering practice. In these cases, the frame is often considered to be the primary structure, while the infill is considered to be a secondary structure. In past calculations, the infill was rarely included in the design of frame structures in terms of their bearing capacity and safety. Recent calculations of such structures necessarily include the effect of the infill, since it contributes to the stiffness and bearing capacity of the overall system, especially under horizontal loads. In certain cases, if the infill is not included in the seismic design of frame structures, the result can be lower design safety. However, since different configurations of the infill through the building’s height are possible, the contribution of such infill to the overall bearing capacity can be lower, while the seismic forces on the building can increase due to the greater stiffness of the structure. So far, much experimental and numerical research on the behavior of infilled frames under horizontal static forces and earthquakes has been performed. In this paper, several masonry-infilled concrete and steel frames under horizontal static forces and earthquake loading are analysed. The shake-table experimental results and the numerical results are compared in terms of the bearing capacity of bare and infilled frames. Herein, the stiffness of the frames and the infill was varied, with different positions of the infill and different types of openings. Cases with positive and negative effects of the infill on the bearing capacity of the frames were considered. Finally, the main conclusions and recommendations for the practical application and design of masonry-infilled concrete and steel frames are given.

Keywords: bearing capacity, infilled frame, numerical model, shake table

Procedia PDF Downloads 446
28490 Enhancement of Pulsed Eddy Current Response Based on Power Spectral Density after Continuous Wavelet Transform Decomposition

Authors: A. Benyahia, M. Zergoug, M. Amir, M. Fodil

Abstract:

The main objective of this work is to enhance the pulsed eddy current (PEC) response from an aluminum structure using signal processing. Cracks and metal loss in different structures cause changes in PEC response measurements. In this paper, time-frequency analysis is used to represent the PEC response, which generates a large quantity of data, and to reduce measurement noise. The power spectral density after wavelet decomposition (PSD-WD) is proposed for defect detection. The experimental results demonstrate that surface cracks can be extracted satisfactorily by the proposed method, and its validity is discussed.
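
A minimal sketch of the processing chain, assuming PyWavelets and SciPy, is given below: a synthetic PEC transient is decomposed with a Mexican hat continuous wavelet transform, and the Welch power spectral density of one decomposition level is computed. The sampling rate, decay constant, and chosen level are illustrative assumptions.

```python
import numpy as np
import pywt
from scipy.signal import welch

fs = 1_000_000                           # sampling rate of the PEC acquisition (Hz), illustrative
t = np.arange(0, 2e-3, 1 / fs)

# Synthetic pulsed eddy current response: decaying transient plus measurement noise;
# a defect typically changes the decay rate, which the analysis is meant to expose.
rng = np.random.default_rng(5)
pec = np.exp(-t / 2e-4) + 0.05 * rng.normal(size=t.size)

# Continuous wavelet transform with the Mexican hat mother wavelet.
scales = np.arange(1, 64)
coeffs, _ = pywt.cwt(pec, scales, "mexh", sampling_period=1 / fs)

# Power spectral density of one decomposition level (Welch estimate).
freqs, psd = welch(coeffs[10], fs=fs, nperseg=256)
print("dominant frequency of level 10:", freqs[np.argmax(psd)], "Hz")
```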

Keywords: DT, pulsed eddy current, continuous wavelet transform, Mexican hat mother wavelet, defect detection, power spectral density

Procedia PDF Downloads 217
28489 Predicting Subsurface Abnormalities Growth Using Physics-Informed Neural Networks

Authors: Mehrdad Shafiei Dizaji, Hoda Azari

Abstract:

The research explores the pioneering integration of Physics-Informed Neural Networks (PINNs) into the domain of Ground-Penetrating Radar (GPR) data prediction, akin to advancements in medical imaging for tracking tumor progression in the human body. This research presents a detailed development framework for a specialized PINN model proficient at interpreting and forecasting GPR data, much like how medical imaging models predict tumor behavior. By harnessing the synergy between deep learning algorithms and the physical laws governing subsurface structures—or, in medical terms, human tissues—the model effectively embeds the physics of electromagnetic wave propagation into its architecture. This ensures that predictions not only align with fundamental physical principles but also mirror the precision needed in medical diagnostics for detecting and monitoring tumors. The suggested deep learning structure comprises three components: a CNN, a spatial feature channel attention (SFCA) mechanism, and ConvLSTM, along with temporal feature frame attention (TFFA) modules. The attention mechanism computes channel attention and temporal attention weights using self-adaptation, thereby fine-tuning the visual and temporal feature responses to extract the most pertinent and significant visual and temporal features. By integrating physics directly into the neural network, our model has shown enhanced accuracy in forecasting GPR data. This improvement is vital for conducting effective assessments of bridge deck conditions and other evaluations related to civil infrastructure. The use of Physics-Informed Neural Networks (PINNs) has demonstrated the potential to transform the field of Non-Destructive Evaluation (NDE) by enhancing the precision of infrastructure deterioration predictions. Moreover, it offers a deeper insight into the fundamental mechanisms of deterioration, viewed through the prism of physics-based models.
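
As a minimal sketch of how a physics term can be embedded in the loss, the PyTorch example below penalizes the residual of a 1-D scalar wave equation computed by automatic differentiation, alongside a data misfit term. The MLP surrogate, the wave speed, and the random "GPR" samples are stand-ins for the paper's CNN–ConvLSTM–attention architecture and the full electromagnetic formulation.

```python
import torch
import torch.nn as nn

# Small fully connected surrogate u(x, t); the paper's CNN/ConvLSTM + attention
# architecture is replaced here by an MLP purely to illustrate the physics term.
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
c = 0.3  # assumed wave speed (normalized units)

def physics_residual(xt):
    """Residual of the 1-D scalar wave equation u_tt - c^2 u_xx = 0."""
    xt = xt.requires_grad_(True)
    u = net(xt)
    grads = torch.autograd.grad(u, xt, grad_outputs=torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    u_xx = torch.autograd.grad(u_x, xt, grad_outputs=torch.ones_like(u_x), create_graph=True)[0][:, 0:1]
    u_tt = torch.autograd.grad(u_t, xt, grad_outputs=torch.ones_like(u_t), create_graph=True)[0][:, 1:2]
    return u_tt - (c ** 2) * u_xx

# Combined loss: data misfit on (hypothetical) GPR samples + physics penalty on collocation points.
xt_data, u_data = torch.rand(128, 2), torch.rand(128, 1)
xt_colloc = torch.rand(256, 2)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(100):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(net(xt_data), u_data) + physics_residual(xt_colloc).pow(2).mean()
    loss.backward()
    optimizer.step()
```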

Keywords: physics-informed neural networks, deep learning, ground-penetrating radar (GPR), NDE, ConvLSTM, physics, data driven

Procedia PDF Downloads 0
28488 Retrofitting Adaptive Reuse into Palaces of Northern India

Authors: Shefali Nayak

Abstract:

The architectural appeal, familiarity, and idiom of culturally significant structures are due to societal attachment to various movements, historical associations, or deviations from them. Generally, the urge to preserve a building in the northern part of India is driven either by emotional dogma or by rational thinking, but it is also influenced by traditional affinity. The northern region of India has an assortment of palaces and havelis belonging to various time periods and families, with a vernacular yet signature style of architecture. Many of them have been successfully conserved by being put to adaptive reuse, while others have been mired in controversy and remain in ruins. The research compares successful examples of adaptive reuse, such as Neemrana and the Mehrangarh Fort palace, with a few merchant havelis converted into heritage hotels. Furthermore, it evaluates the architectural aspects of structure, materials, plumbing, and electrical installations, as well as the specific challenges faced by heritage professionals practicing sustainability while respecting the traditional feelings of various stakeholders. Through the analysis of the case studies, this paper concludes that sustainable design cannot be used as a stand-alone application for heritage structures or cities; it needs the support of architectural conservation to be put into practice. However, it is often demanding to fit a new use into an aged structure. This paper records the modern-day generic requirements that reflect the challenges faced by different architects while conserving a heritage structure and retrofitting it to today's requisites. The research objective is to establish how conservation, restoration, and urban regeneration are closely related to sustainable architecture in historical cities.

Keywords: architecture conservation, architecture heritage, adaptive reuse, retrofitting, sustainability, urban regeneration

Procedia PDF Downloads 166
28486 Integration Process and Analytic Interface of Different Environmental Open Data Sets with Java/Oracle and R

Authors: Pavel H. Llamocca, Victoria Lopez

Abstract:

The main objective of our work is the comparative analysis of environmental data from Open Data bases belonging to different governments, which means integrating data from various different sources. Nowadays, many governments intend to publish thousands of data sets for people and organizations to use, and the number of applications based on Open Data is therefore increasing. However, each government has its own procedures for publishing its data, and this causes a variety of data set formats, because there are no international standards specifying the formats of data sets in Open Data bases. Due to this variety, we must build a data integration process that is able to put together all kinds of formats. Some software tools have been developed to support the integration process, e.g. Data Tamer and Data Wrangler. The problem with these tools is that they need a data scientist to take part in the integration process as a final step. In our case we do not want to depend on a data scientist, because environmental data are usually similar and these processes can be automated by programming. The main idea of our tool is to build Hadoop procedures adapted to the data sources of each government in order to achieve an automated integration. Our work focuses on environmental data such as temperature, energy consumption, air quality, solar radiation, wind speed, etc. For the last two years, the government of Madrid has been publishing its Open Data sets on environmental indicators in real time. In the same way, other governments (such as Andalucia or Bilbao) have published Open Data sets relative to the environment. All of those data sets have different formats, and our solution is able to integrate all of them; furthermore, it allows the user to perform and visualize analyses over the real-time data. Once the integration task is done, all the data from any government have the same format, and the analysis process can be initiated in a computationally better way. So the tool presented in this work has two goals: 1. the integration process; and 2. a graphic and analytic interface. As a first approach, the integration process was developed using Java and Oracle, and the graphic and analytic interface with Java (JSP). However, in order to open up our software tool, as a second approach we also developed an implementation in the R language as a mature open-source technology. R is a really powerful open-source programming language that allows us to process and analyze a huge amount of data with high performance, and there are also R libraries for building a graphical interface, such as shiny. A performance comparison between both implementations was made, and no significant differences were found. In addition, our work provides an official real-time integrated data set of environmental data in Spain to any developer, so that they can build their own applications.
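
The per-source adapter idea behind the integration process can be sketched as follows in Python/pandas (the actual tool is built on Java/Oracle and R): each government's file is parsed with its own separator and column names and mapped onto one common schema. The column names, separators, and sample values below are hypothetical.

```python
import io
import pandas as pd

# Hypothetical excerpts: each government publishes the same kind of measurement
# with its own column names, separators and labels.
madrid_csv = "fecha;estacion;no2_ugm3\n2016-05-01;Plaza España;42\n"
bilbao_csv = "date,station,NO2 (µg/m3)\n2016-05-01,Casco Viejo,35\n"

COMMON = ["date", "station", "no2_ug_m3", "source"]   # target integrated schema

def adapt_madrid(text):
    df = pd.read_csv(io.StringIO(text), sep=";")
    df = df.rename(columns={"fecha": "date", "estacion": "station", "no2_ugm3": "no2_ug_m3"})
    return df.assign(source="madrid")[COMMON]

def adapt_bilbao(text):
    df = pd.read_csv(io.StringIO(text), sep=",")
    df = df.rename(columns={"NO2 (µg/m3)": "no2_ug_m3"})
    return df.assign(source="bilbao")[COMMON]

integrated = pd.concat([adapt_madrid(madrid_csv), adapt_bilbao(bilbao_csv)], ignore_index=True)
integrated["date"] = pd.to_datetime(integrated["date"])
print(integrated)
```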

Keywords: open data, R language, data integration, environmental data

Procedia PDF Downloads 298
28486 The Relationship between Basic Human Needs and Opportunity Based on Social Progress Index

Authors: Ebru Ozgur Guler, Huseyin Guler, Sera Sanli

Abstract:

The Social Progress Index (SPI), whose foundations were laid at the World Economic Forum, is an index that aims to form a systematic basis for guiding strategy for inclusive growth, which requires achieving both economic and social progress. This research aims to determine the relations between the “Basic Human Needs” (BHN) dimension (comprising the four variables ‘Nutrition and Basic Medical Care’, ‘Water and Sanitation’, ‘Shelter’, and ‘Personal Safety’) and the “Opportunity” (OPT) dimension (composed of ‘Personal Rights’, ‘Personal Freedom and Choice’, ‘Tolerance and Inclusion’, and ‘Access to Advanced Education’) of the 2016 SPI for the 138 countries listed on the Social Progress Imperative website, by carrying out canonical correlation analysis (CCA), a data reduction technique that operates so as to maximize the correlation between two variable sets. In the interpretation of results, the first pair of canonical variates, which corresponds to the highest canonical correlation, has been taken into account. The first canonical correlation coefficient has been found to be 0.880, indicating a strong relationship between the BHN and OPT variable sets. The Wilks’ lambda statistic revealed an overall effect of 0.809 for the full model, large enough to be counted as statistically significant (p-value of 0.000). According to the standardized canonical coefficients, the largest contribution to the BHN set has come from the ‘shelter’ variable, and the most effective variable in the OPT set has been found to be ‘access to advanced education’. Findings based on canonical loadings confirm these results with respect to the contributions to the first canonical variates. When the canonical cross-loadings (structure coefficients) are examined for the first pair of canonical variates, the largest contributions are provided by the ‘shelter’ and ‘access to advanced education’ variables. Since the signs of the structure coefficients are negative for all variables, all variables in the OPT set are positively related to all variables in the BHN set. If the canonical communality coefficients, which are the sums of the squared structure coefficients across all interpretable functions, are taken as the basis, then among all variables ‘personal rights’ and ‘tolerance and inclusion’ can be said not to be useful in the model, with coefficients of 0.318721 and 0.341722, respectively. On the other hand, while the redundancy index for the BHN set has been found to be 0.615, the OPT set has a lower redundancy index of 0.475; high redundancy implies high predictability. The proportion of the total variation in the BHN set that is explained by all of the opposite canonical variates has been calculated as 63%, and the proportion of the total variation in the OPT set that is explained by all of the canonical variates of the BHN set has been determined as 50.4%, a large part of which belongs to the first pair. The results suggest that there is a strong and statistically significant relationship between BHN and OPT, accounted for mainly by ‘shelter’ and ‘access to advanced education’.
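
A brief sketch of the canonical correlation computation, assuming scikit-learn and synthetic stand-in data (the real 2016 SPI indicators are taken from the Social Progress Imperative website), is given below; with the actual data, this is the calculation that yields the reported first canonical correlation of 0.880.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(6)
n = 138                                   # number of countries in the 2016 SPI

# Synthetic stand-ins for the two SPI dimensions.
bhn = rng.normal(size=(n, 4))             # Nutrition, Water, Shelter, Personal Safety
opt = 0.8 * bhn @ rng.normal(size=(4, 4)) + 0.5 * rng.normal(size=(n, 4))  # Opportunity components

cca = CCA(n_components=1)
bhn_scores, opt_scores = cca.fit_transform(bhn, opt)

# First canonical correlation between the two sets of canonical variates.
first_canonical_corr = np.corrcoef(bhn_scores[:, 0], opt_scores[:, 0])[0, 1]
print(f"first canonical correlation: {first_canonical_corr:.3f}")
```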

Keywords: canonical communality coefficient, canonical correlation analysis, redundancy index, social progress index

Procedia PDF Downloads 204
28485 Water Resources Green Efficiency in China: Evaluation, Spatial Association Network Structure Analysis, and Influencing Factors

Authors: Tingyu Zhang

Abstract:

This paper utilizes the Super-SBM model to assess water resources green efficiency (WRGE) among provinces in China and investigate its spatial and temporal features, based on the characteristic framework of “economy-environment-society.” Social network analysis is employed to examine the network pattern and spatial interaction of WRGE. Further, the quadratic assignment procedure method is utilized to examine the factors influencing the spatial association of WRGE from a “relationship” perspective. The study reveals that: (1) the spatial distribution of WRGE demonstrates a pattern of Eastern > Western > Central; (2) a remarkable spatial association exists among provinces; however, no strict hierarchical structure is observed, and the internal structure of the WRGE network is characterized by the feature of “Eastern strong and Western weak.” The block model analysis shows that the members of the “net spillover” and “two-way spillover” blocks are mostly eastern and central provinces; the “broker” block, which plays an intermediary role, consists mostly of central provinces; and the members of the “net beneficiary” block are mostly in the western region. (3) Differences in economic development, degree of urbanization, water use environment, and water management have significant impacts on the spatial connection of WRGE. This study is dedicated to the realization of regional linkages and the synergistic enhancement of WRGE, which provides a meaningful basis for building a harmonious society of human and water coexistence.

Keywords: water resources green efficiency, super-SBM model, social network analysis, quadratic assignment procedure

Procedia PDF Downloads 40
28484 Transforming Data into Knowledge: Mathematical and Statistical Innovations in Data Analytics

Authors: Zahid Ullah, Atlas Khan

Abstract:

The rapid growth of data in various domains has created a pressing need for effective methods to transform this data into meaningful knowledge. In this era of big data, mathematical and statistical innovations play a crucial role in unlocking insights and facilitating informed decision-making in data analytics. This abstract aims to explore the transformative potential of these innovations and their impact on converting raw data into actionable knowledge. Drawing upon a comprehensive review of existing literature, this research investigates the cutting-edge mathematical and statistical techniques that enable the conversion of data into knowledge. By evaluating their underlying principles, strengths, and limitations, we aim to identify the most promising innovations in data analytics. To demonstrate the practical applications of these innovations, real-world datasets will be utilized through case studies or simulations. This empirical approach will showcase how mathematical and statistical innovations can extract patterns, trends, and insights from complex data, enabling evidence-based decision-making across diverse domains. Furthermore, a comparative analysis will be conducted to assess the performance, scalability, interpretability, and adaptability of different innovations. By benchmarking against established techniques, we aim to validate the effectiveness and superiority of the proposed mathematical and statistical innovations in data analytics. Ethical considerations surrounding data analytics, such as privacy, security, bias, and fairness, will be addressed throughout the research. Guidelines and best practices will be developed to ensure the responsible and ethical use of mathematical and statistical innovations in data analytics. The expected contributions of this research include advancements in mathematical and statistical sciences, improved data analysis techniques, enhanced decision-making processes, and practical implications for industries and policymakers. The outcomes will guide the adoption and implementation of mathematical and statistical innovations, empowering stakeholders to transform data into actionable knowledge and drive meaningful outcomes.

Keywords: data analytics, mathematical innovations, knowledge extraction, decision-making

Procedia PDF Downloads 58
28483 FCNN-MR: A Parallel Instance Selection Method Based on Fast Condensed Nearest Neighbor Rule

Authors: Lu Si, Jie Yu, Shasha Li, Jun Ma, Lei Luo, Qingbo Wu, Yongqi Ma, Zhengji Liu

Abstract:

The instance selection (IS) technique is used to reduce the data size in order to improve the performance of data mining methods. Recently, to process very large data sets, several proposed methods divide the training set into disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitations of these methods and give our viewpoint on how to divide and conquer in the IS procedure. Then, based on the fast condensed nearest neighbor (FCNN) rule, we propose an instance selection method for large data sets within the MapReduce framework. Besides ensuring the prediction accuracy and reduction rate, it has two desirable properties: first, it reduces the workload in the aggregation node; second, and most important, it produces the same result as the sequential version, which other parallel methods cannot achieve. We evaluate the performance of FCNN-MR on one small data set and two large data sets. The experimental results show that it is effective and practical.
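
For readers unfamiliar with the condensation idea that FCNN accelerates, the Python sketch below implements the classic condensed nearest neighbor (Hart) rule: it keeps a subset such that every training instance is correctly classified by its nearest neighbor in that subset. The FCNN-specific centroid seeding and the MapReduce partitioning of the proposed FCNN-MR are not reproduced; the two-cluster data are illustrative.

```python
import numpy as np

def condensed_nearest_neighbor(X, y, seed=None):
    """Classic condensed nearest neighbor (Hart) rule: grow a subset S until every
    training point is correctly classified by its 1-NN in S."""
    rng = np.random.default_rng(seed)
    selected = [int(rng.integers(len(X)))]            # seed S with one random instance
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            d = np.linalg.norm(X[selected] - X[i], axis=1)
            nearest = selected[int(np.argmin(d))]
            if y[nearest] != y[i]:                    # misclassified -> absorb into S
                selected.append(i)
                changed = True
    return np.array(selected)

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
keep = condensed_nearest_neighbor(X, y, seed=7)
print(f"reduction: {len(X)} -> {len(keep)} instances")
```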

Keywords: instance selection, data reduction, MapReduce, kNN

Procedia PDF Downloads 238