Search results for: algorithms and data structure
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 32302

30982 Quantifying Stability of Online Communities and Its Impact on Disinformation

Authors: Victor Chomel, Maziyar Panahi, David Chavalarias

Abstract:

Misinformation occupies an increasingly worrying place on social media, and propagation patterns are closely linked to community structure. This study proposes a method of community analysis based on a combination of centrality indicators for the network and its main communities. The objective is to establish a link between the stability of communities over time, the internal social ascension of their members, and the propagation of information within the community. To this end, data from the debates about global warming and from political communities on Twitter have been collected, and several tens of millions of tweets and retweets have helped us better understand the structure of these communities. Quantifying this stability allows for the study of the propagation of information of any kind, including disinformation. Our results indicate that the most stable communities over time are those that enable the establishment of nodes capturing a large part of the information flow and broadcasting their opinions. Conversely, communities with high turnover and social ascendancy stabilize strongly only in the face of adversity and external events, but seem to offer a greater diversity of opinions most of the time.
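The stability and centrality notions above can be made concrete with two toy metrics. This is an illustrative sketch only, not the study's actual indicators: `jaccard_stability` (membership overlap between consecutive community snapshots, as a turnover proxy) and `degree_centrality` (in-degree share in a directed retweet graph) are assumptions chosen for illustration.

```python
def jaccard_stability(snapshots):
    """Stability of one community across time: mean Jaccard overlap
    between consecutive membership snapshots (1.0 = no turnover)."""
    scores = []
    for prev, curr in zip(snapshots, snapshots[1:]):
        union = prev | curr
        scores.append(len(prev & curr) / len(union) if union else 1.0)
    return sum(scores) / len(scores)

def degree_centrality(edges):
    """In-degree share per node for a directed retweet edge list
    (src retweets dst); high values flag information-capturing hubs."""
    indeg = {}
    nodes = set()
    for src, dst in edges:
        nodes.update((src, dst))
        indeg[dst] = indeg.get(dst, 0) + 1
    n = len(nodes)
    return {v: indeg.get(v, 0) / (n - 1) for v in nodes}
```

A community observed as the same member set twice scores 1.0; replacing one member of three drops the score to 0.5.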

Keywords: community analysis, disinformation, misinformation, Twitter

Procedia PDF Downloads 144
30981 Diagnostics of Existing Steel Structures of Winter Sport Halls

Authors: Marcela Karmazínová, Jindrich Melcher, Lubomír Vítek, Petr Cikrle

Abstract:

The paper deals with the diagnostics of the steel roof structure of a winter sports stadium built in 1970. The diagnostics were required for the design evaluation of this structure, prompted by the new loading requirements introduced when the European Standards came into force in the Czech Republic in 2010. Due to these changes in the normative rules, existing structures are in practice gradually subjected to design evaluation and, depending on its results, to strengthening or reconstruction. The steel roof is composed of plane truss main girders, purlins and bracings, and the roof structure is supported by two arch main girders with a span of L = 84 m. The in situ diagnostics of the roof structure addressed the following tasks: (i) determination and evaluation of the actual material properties of the steel used and (ii) verification of the actual dimensions of the structural members. Non-destructive methods were used for the in situ measurements. For the indicative determination of steel strength, a modified method based on Rockwell hardness measurement was used. For the verification of member dimensions (thickness of hollow sections), an ultrasound method was used. This paper presents the results obtained using these testing methods and their evaluation from the viewpoint of their use in the subsequent static assessment and design evaluation of the existing structure. For comparison, examples of similar evaluations carried out on the steel structures of the stadiums in Olomouc and Jihlava are also briefly presented.

Keywords: actual dimensions, destructive methods, diagnostics, existing steel structure, indirect non-destructive methods, Rockwell hardness, sport hall, steel strength, ultrasound method

Procedia PDF Downloads 344
30980 Shuffled Structure for 4.225 GHz Antireflective Plates: A Proposal Proven by Numerical Simulation

Authors: Shin-Ku Lee, Ming-Tsu Ho

Abstract:

A newly proposed antireflective selector with a shuffled structure is reported in this paper. The proposed design consists of two different quarter-wavelength (QW) slabs and is numerically shown, through one-dimensional simulation by the method of characteristics (MOC), to function as an antireflective selector. The two QW slabs, characterized by dielectric constants εᵣA and εᵣB, are uniformly divided into N and N+1 pieces respectively, which are then shuffled to form an antireflective plate with a B(AB)ᴺ structure, such that there is always one εᵣA piece between two εᵣB pieces. The other is the A(BA)ᴺ structure, where every εᵣB piece is sandwiched between two εᵣA pieces. Both proposed structures are numerically shown to function as QW plates. In order to allow maximum transmission through the proposed structures, the two dielectric constants are chosen to satisfy (εᵣA)² = εᵣB > 1. The advantage of the proposed structures over traditional anti-reflection coating techniques is that only two materials with two thicknesses are needed, shuffled to form new QW structures. The design wavelength used to validate the proposed idea is 71 mm, corresponding to a frequency of about 4.225 GHz. The computational results are shown in both time and frequency domains, revealing that the proposed structures produce minimum reflection around the frequency of interest.
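The reflection behaviour of layered QW plates of this kind can be cross-checked with a standard characteristic-matrix (transfer-matrix) calculation at normal incidence. The sketch below is not the paper's time-domain MOC simulation; it is a generic frequency-domain method, and the layer lists used in the examples (a half-wave slab, a classic single-layer AR coating) are textbook cases chosen only to validate the routine.

```python
import cmath

def layer_matrix(n, d, lam):
    """Characteristic matrix of one lossless dielectric layer,
    normal incidence: phase thickness delta = 2*pi*n*d/lam."""
    delta = 2 * cmath.pi * n * d / lam
    return [[cmath.cos(delta), 1j * cmath.sin(delta) / n],
            [1j * n * cmath.sin(delta), cmath.cos(delta)]]

def mat_mul(a, b):
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def reflectance(layers, lam, n_in=1.0, n_out=1.0):
    """Power reflectance of a stack of (refractive_index, thickness)
    layers between media n_in and n_out."""
    m = [[1, 0], [0, 1]]
    for n, d in layers:
        m = mat_mul(m, layer_matrix(n, d, lam))
    num = n_in*m[0][0] + n_in*n_out*m[0][1] - m[1][0] - n_out*m[1][1]
    den = n_in*m[0][0] + n_in*n_out*m[0][1] + m[1][0] + n_out*m[1][1]
    return abs(num / den) ** 2
```

At the 71 mm design wavelength a half-wave slab in air is transparent, and a quarter-wave layer with n = √n_substrate gives zero reflection on that substrate; both known results fall out of the routine.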

Keywords: method of characteristics, quarter wavelength, anti-reflective plate, propagation of electromagnetic fields

Procedia PDF Downloads 148
30979 Fluid Structure Interaction of Offshore Concrete Columns under Explosion Loads

Authors: Ganga K. V. Prakhya, V. Karthigeyan

Abstract:

The paper describes the influence of fluid-structure interaction in the concrete structures that support large oil platforms in the North Sea. The dynamic interaction with the fluid, in both 2D and 3D, is demonstrated through computational fluid dynamics analysis of an explosion following a gas leak inside a concrete column. The structural response characteristics of the column in water under dynamic conditions are quite complex, involving axial, radial and circumferential modes. Fluid-structure interaction (FSI) modelling showed that the column in water exhibits some frequencies that are not found for a column in air. For example, it was demonstrated that one of the axial breathing modes can never be simulated without the use of FSI models. The shift in magnitude and timing of the explosion pressure along the height of the shaft not only excited the breathing (axial), bending and squashing (radial) modes of vibration but also magnified the forces in the column. FSI models revealed that dynamic effects resulted in dynamic amplification of loads. The results are summarized from a detailed study carried out by the first author for the Offshore Safety Division of the Health & Safety Executive, United Kingdom.

Keywords: concrete, explosion, fluid structure interaction, offshore structures

Procedia PDF Downloads 193
30978 An Integrated Framework for Seismic Risk Mitigation Decision Making

Authors: Mojtaba Sadeghi, Farshid Baniassadi, Hamed Kashani

Abstract:

One of the challenging issues faced by seismic retrofitting consultants and their clients is deciding quickly whether to demolish or retrofit a structure, now or in the future. Existing models proposed by researchers each cover only one of the relevant aspects: cost, execution method, or structural vulnerability. Given the effect of each factor on the final decision, it is crucial to devise a comprehensive model capable of covering all the factors simultaneously. This study provides an integrated framework that can be used to select the most appropriate earthquake risk mitigation solution for buildings. The framework overcomes the limitations of current models by taking into account several factors, such as cost, execution method, risk tolerance and structural failure. In the proposed model, the database and essential information about retrofitting projects are developed from historical data on retrofit projects. In the next phase, an analysis is conducted to assess the vulnerability of the building under study. An artificial neural network is then employed to estimate the cost of retrofitting. While calculating the current value of the structure, an economic analysis is conducted to compare demolition versus retrofitting costs. At the next stage, the optimal method is identified. Finally, the application of the framework was demonstrated using data collected from 155 previous projects.

Keywords: decision making, demolition, construction management, seismic retrofit

Procedia PDF Downloads 241
30977 Data Mining in Healthcare for Predictive Analytics

Authors: Ruzanna Muradyan

Abstract:

Medical data mining is a crucial field in contemporary healthcare that offers cutting-edge tactics with enormous potential to transform patient care. This abstract examines how sophisticated data mining techniques could transform the healthcare industry, with a special focus on how they might improve patient outcomes. Healthcare data repositories have dynamically evolved, producing a rich tapestry of different, multi-dimensional information that includes genetic profiles, lifestyle markers, electronic health records, and more. By utilizing data mining techniques inside this vast library, a variety of prospects for precision medicine, predictive analytics, and insight production become visible. Predictive modeling for illness prediction, risk stratification, and therapy efficacy evaluations are important points of focus. Healthcare providers may use this abundance of data to tailor treatment plans, identify high-risk patient populations, and forecast disease trajectories by applying machine learning algorithms and predictive analytics. Better patient outcomes, more efficient use of resources, and early treatments are made possible by this proactive strategy. Furthermore, data mining techniques act as catalysts to reveal complex relationships between apparently unrelated data pieces, providing enhanced insights into the cause of disease, genetic susceptibilities, and environmental factors. Healthcare practitioners can get practical insights that guide disease prevention, customized patient counseling, and focused therapies by analyzing these associations. The abstract explores the problems and ethical issues that come with using data mining techniques in the healthcare industry. In order to properly use these approaches, it is essential to find a balance between data privacy, security issues, and the interpretability of complex models. 
Finally, this abstract demonstrates the revolutionary power of modern data mining methodologies in transforming the healthcare sector. Healthcare practitioners and researchers can uncover unique insights, enhance clinical decision-making, and ultimately elevate patient care to unprecedented levels of precision and efficacy by employing cutting-edge methodologies.

Keywords: data mining, healthcare, patient care, predictive analytics, precision medicine, electronic health records, machine learning, predictive modeling, disease prognosis, risk stratification, treatment efficacy, genetic profiles, precision health

Procedia PDF Downloads 66
30976 Examples of Techniques and Algorithms Used in WLAN Security

Authors: Vahid Bairami Rad

Abstract:

Wireless communications offer organizations and users many benefits, such as portability and flexibility, increased productivity, and lower installation costs. Wireless networks serve as the transport mechanism between devices and between devices and traditional wired networks (enterprise networks and the internet). Wireless networks are many and diverse but are frequently categorized into three groups based on their coverage range: WWAN, WLAN, and WPAN. WWAN, wireless wide area networks, includes wide-coverage technologies such as 2G cellular, Cellular Digital Packet Data (CDPD), Global System for Mobile Communications (GSM), and Mobitex. WLAN, wireless local area networks, includes 802.11, HiperLAN, and several others. WPAN, wireless personal area networks, includes technologies such as Bluetooth and infrared. The security services are provided largely by the WEP (Wired Equivalent Privacy) protocol, which protects link-level data during wireless transmission between clients and access points. That is, WEP does not provide end-to-end security, but only secures the wireless portion of the connection.
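WEP's link-level encryption is RC4 keyed with a 24-bit per-packet IV concatenated with the shared key. A minimal sketch follows; the `wep_encrypt` helper and its parameters are illustrative only, and real WEP also appends a CRC-32 integrity value before encryption, which is omitted here.

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """RC4 stream cipher: KSA key schedule, then PRGA keystream XORed
    with the data (encryption and decryption are the same operation)."""
    s = list(range(256))
    j = 0
    for i in range(256):                       # key-scheduling algorithm
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    out = bytearray()
    i = j = 0
    for byte in data:                          # pseudo-random generation
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(byte ^ s[(s[i] + s[j]) % 256])
    return bytes(out)

def wep_encrypt(iv3: bytes, key: bytes, plaintext: bytes) -> bytes:
    """WEP seeds RC4 with IV || key; the 24-bit IV travels in the clear,
    so IV reuse leaks XORs of plaintexts -- the core WEP weakness."""
    return rc4(iv3 + key, plaintext)
```

Because RC4 is its own inverse under the same key, decryption is a second call to `rc4` with the same IV-plus-key seed.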

Keywords: wireless lan, wired equivalent privacy, wireless network security, wlan security

Procedia PDF Downloads 573
30975 Embedded Hybrid Intuition: A Deep Learning and Fuzzy Logic Approach to Collective Creation and Computational Assisted Narratives

Authors: Roberto Cabezas H

Abstract:

This work presents the methodology developed to create narrative lighting spaces for the multimedia performance piece 'cluster: the vanished paradise.' This empirical research explores unconventional roles for machines in subjective creative processes by delving into the semantics of data and machine intelligence algorithms in hybrid technological and creative contexts, expanding epistemic domains through human-machine cooperation. The creative process in the scenic and performing arts is guided mostly by intuition; from that idea, we developed an approach to embed collective intuition in computational creative systems by joining the properties of generative adversarial networks (GANs) and fuzzy clustering in a semi-supervised data creation and analysis pipeline. The model uses GANs to learn from phenomenological data (data generated from experience with lighting scenography) and algorithmic design data (data augmented by procedural design methods); fuzzy clustering is then applied to the data generated by the GANs to define narrative transitions built on membership indices. This process allowed for the creation of simple and complex spaces with expressive capabilities, using position and light intensity as the parameters guiding the narrative. Hybridization comes not only from the human-machine symbiosis but also from the integration of different techniques in the implementation of the aided design system. Machine intelligence tools as proposed in this work are well suited to redefining collaborative creation by learning to express and expand a conglomerate of ideas and a wide range of opinions for the creation of sensory experiences. We found GANs and fuzzy logic to be ideal tools for developing new computational models based on interaction, learning, emotion and imagination that expand the traditional algorithmic model of computation.
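The fuzzy-clustering step can be illustrated with a plain fuzzy c-means loop. This is a generic textbook version, not the authors' pipeline; the fuzzifier m, iteration count, and initial centers are assumptions, and the 2-D points stand in for (position, intensity) samples.

```python
def fuzzy_c_means(points, c, m=2.0, iters=100, init=None):
    """Plain fuzzy c-means; returns (centers, u), where u[i][k] is the
    degree of membership of points[i] in cluster k."""
    centers = list(init) if init else [points[k] for k in range(c)]
    u = [[0.0] * c for _ in points]
    dim = len(points[0])
    for _ in range(iters):
        # membership update from relative inverse distances
        for i, p in enumerate(points):
            d = [max(_dist(p, ctr), 1e-12) for ctr in centers]
            for k in range(c):
                u[i][k] = 1.0 / sum((d[k] / dj) ** (2.0 / (m - 1.0)) for dj in d)
        # center update: membership-weighted mean of all points
        for k in range(c):
            w = [u[i][k] ** m for i in range(len(points))]
            tot = sum(w)
            centers[k] = tuple(
                sum(wi * p[a] for wi, p in zip(w, points)) / tot for a in range(dim))
    return centers, u

def _dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
```

The membership index u[i][k] is exactly the quantity on which narrative transitions could be thresholded: points near a cluster center get memberships near 1, points between clusters get graded values.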

Keywords: fuzzy clustering, generative adversarial networks, human-machine cooperation, hybrid collective data, multimedia performance

Procedia PDF Downloads 144
30974 Bank Concentration and Industry Structure: Evidence from China

Authors: Jingjing Ye, Cijun Fan, Yan Dong

Abstract:

The development of the financial sector plays an important role in shaping industrial structure. However, evidence on the micro-level channels through which this relation manifests remains relatively sparse, particularly for developing countries. In this paper, we compile an industry-by-city dataset based on manufacturing firms and registered banks in 287 Chinese cities from 1998 to 2008. Using a difference-in-differences approach, we find that a highly concentrated banking sector decreases the competitiveness of firms in each manufacturing industry. There are two main reasons: i) bank accessibility successfully fosters firm expansion within each industry, but only for sufficiently large enterprises; ii) state-owned enterprises are favored by the banking industry in China. The results are robust to alternative concentration and external finance dependence measures.
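For readers unfamiliar with the identification strategy, a basic 2×2 difference-in-differences estimate reduces to four group means. The sketch below is a minimal illustration of that arithmetic, not the paper's actual specification, which presumably includes controls and fixed effects.

```python
def diff_in_diff(data):
    """2x2 difference-in-differences from records of the form
    (treated: bool, post: bool, outcome: float).
    Estimate = (treated post - treated pre) - (control post - control pre),
    i.e. the treated group's change net of the common trend."""
    def mean(treated, post):
        vals = [y for t, p, y in data if t == treated and p == post]
        return sum(vals) / len(vals)
    return (mean(True, True) - mean(True, False)) - (mean(False, True) - mean(False, False))
```

With a common trend of +1 and a treatment effect of -2, the estimator recovers -2 even though the treated group's raw change is only -1.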

Keywords: bank concentration, China, difference-in-difference, industry structure

Procedia PDF Downloads 394
30973 Multiscale Analysis of Shale Heterogeneity in Silurian Longmaxi Formation from South China

Authors: Xianglu Tang, Zhenxue Jiang, Zhuo Li

Abstract:

Characterization of multiscale heterogeneity is an important part of evaluating the size and spatial distribution of shale gas reservoirs in sedimentary basins. The origin of shale heterogeneity has long been a hot research topic, for it determines both the description of shale micro-characteristics and the prediction of macro-scale reservoir quality. Multiscale shale heterogeneity was discussed based on thin section observation, FIB-SEM, QEMSCAN, TOC, XRD, mercury intrusion porosimetry (MIP), and nitrogen adsorption analysis of 30 core samples from the Silurian Longmaxi formation. Results show that shale heterogeneity can be characterized by pore structure and mineral composition. The heterogeneity of shale pores is expressed by pores of different sizes at the nm-μm scale. Macropores (pore diameter > 50 nm) account for a larger percentage of pore volume than mesopores (pore diameter 2-50 nm) and micropores (pore diameter < 2 nm), but have a lower specific surface area. Fractal dimensions of the pores derived from nitrogen adsorption data are higher than 2.7, and those from MIP data are higher than 2.8, indicating extremely complex pore structure. This complexity is mainly due to the organic matter and clay minerals, which have complex pore network structures, and diagenesis makes it more complicated. The heterogeneity of shale minerals is expressed by mineral grains, laminae, and different lithologies at the nm-km scale under continuously changing horizons. By analyzing the change in mineral composition at each scale, the random arrangement of minerals in equal proportion, seasonal climate changes, large changes in the sedimentary environment, and provenance supply are considered to be the main causes of shale mineral heterogeneity from the microscopic to the macroscopic scale. Due to scale effects, the change of multiscale shale heterogeneity is a discontinuous process, and there is a transition boundary between homogeneous and inhomogeneous behavior.
Therefore, a multiscale shale heterogeneity model is established by defining four types of homogeneous units at different scales, which can be used to guide the prediction of shale gas distribution from the micro scale to the macro scale.
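One common way to obtain pore fractal dimensions like those quoted from nitrogen adsorption data is the Frenkel-Halsey-Hill (FHH) plot. The abstract does not state which model the authors used, so the sketch below, which uses the D = 3A + 3 variant of the slope relation, is an assumption for illustration only.

```python
import math

def fhh_fractal_dimension(p_rel, volumes):
    """Fractal dimension D from an FHH plot of nitrogen adsorption data:
    ln V = C + A * ln(ln(1/p_rel)), with D = 3A + 3 (one common variant;
    D = A + 3 is used in the capillary-condensation regime instead).
    p_rel: relative pressures P/P0 in (0, 1); volumes: adsorbed volumes."""
    xs = [math.log(math.log(1.0 / p)) for p in p_rel]
    ys = [math.log(v) for v in volumes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 3.0 * slope + 3.0
```

Synthetic data generated with FHH slope A = -0.1 recovers D = 2.7, matching the order of the values reported in the abstract.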

Keywords: heterogeneity, homogeneous unit, multiscale, shale

Procedia PDF Downloads 457
30972 Stress Variation of Underground Building Structure during Top-Down Construction

Authors: Soo-yeon Seo, Seol-ki Kim, Su-jin Jung

Abstract:

In the construction of a building, especially in urban areas, it is necessary to minimize the construction period and secure enough work space for stacking materials during construction. To this end, various top-down construction methods have been developed and are widely used in Korea. This paper investigates, through an analytical approach, the stress variation in the underground structure of a building constructed using SPS (Strut as Permanent System), a top-down method used in Korea. Various types of earth pressure distribution related to ground conditions were considered in the structural analysis of an example structure at each step of the excavation. From the analysis, the highest member forces in the beams were found when the ground was medium sandy soil, and a stress concentration was found in the corner area.

Keywords: construction of building, top-down construction method, earth pressure distribution, member force, stress concentration

Procedia PDF Downloads 312
30971 Classification and Analysis of 8-Bit to 8-Bit S-Boxes Using S-Box Evaluation Characteristics

Authors: Muhammad Luqman, Yusuf Kurniawan

Abstract:

The S-Box is the principal non-linear component of a cryptographic algorithm; its presence is needed to maintain the non-linearity of the algorithm, and modern cryptographic algorithms use an S-Box as part of their processing. Although several cryptographic algorithms today reuse theoretically secure and carefully constructed S-Boxes, there are evaluation characteristics that can measure the security properties of S-Boxes and hence of the corresponding primitives. Analysis of an S-Box is usually done by manual mathematical calculation, and many S-Boxes are presented as a truth table without any underlying mathematical algorithm, which makes it rather difficult to determine the strength of a truth-table S-Box. A comprehensive analysis should therefore be applied to the truth-table S-Box to determine its characteristics. S-Boxes should possess several important characteristics: nonlinearity, balancedness, algebraic degree, LAT, DAT, differential delta uniformity, correlation immunity and the global avalanche criterion. This work presents a comprehensive tool that automatically calculates these characteristics and determines the strength of an S-Box. The analysis is done as a deterministic process that produces a sequence of S-Box characteristics and gives advice for better S-Box construction.
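A few of the listed characteristics can be computed directly from a truth table. The sketch below is a generic implementation, not the authors' tool: it derives nonlinearity from the Walsh-Hadamard transform of each non-zero component function and checks balancedness via bijectivity, shown on a 4-bit S-Box for brevity (the same code handles 8-bit tables).

```python
def walsh_max(f_bits):
    """Max absolute Walsh-Hadamard coefficient of a Boolean function
    given as a list of 0/1 outputs over all 2^n inputs."""
    n_points = len(f_bits)
    w = [1 - 2 * b for b in f_bits]           # map 0/1 -> +1/-1
    step = 1
    while step < n_points:                    # in-place fast WHT
        for i in range(0, n_points, 2 * step):
            for j in range(i, i + step):
                a, b = w[j], w[j + step]
                w[j], w[j + step] = a + b, a - b
        step *= 2
    return max(abs(x) for x in w)

def nonlinearity(sbox, n):
    """Minimum nonlinearity over all non-zero component functions b.S(x):
    NL = 2^(n-1) - max|W|/2, minimized over output masks."""
    best = None
    for mask in range(1, 1 << n):
        f = [bin(sbox[x] & mask).count("1") & 1 for x in range(1 << n)]
        nl = (1 << (n - 1)) - walsh_max(f) // 2
        best = nl if best is None else min(best, nl)
    return best

def balanced(sbox, n):
    """A bijective S-Box (each output exactly once) has only balanced
    component functions."""
    return sorted(sbox) == list(range(1 << n))
```

The PRESENT cipher's 4-bit S-Box reaches the optimal nonlinearity of 4, while the identity mapping, being purely linear, scores 0.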

Keywords: cryptographic properties, Truth Table S-Boxes, S-Boxes characteristic, deterministic process

Procedia PDF Downloads 366
30970 A First Step towards Automatic Evolutionary Optimization of Gas Lift Allocation

Authors: Younis Elhaddad, Alfonso Ortega

Abstract:

Oil production by means of gas lift is a standard technique in the oil production industry. Optimizing the total amount of oil produced in terms of the amount of gas injected is a key question in this domain. Different methods have been tested to propose a general methodology; many apply well-known numerical methods, and some exploit the power of evolutionary approaches. Our goal is to provide domain experts with a powerful automatic search engine into which they can introduce their knowledge in a format close to the one used in their domain, and get solutions comprehensible in the same terms. Our previous proposals introduced into the genetic engine highly expressive formal models to represent solutions to the problem. These algorithms have proven to be as effective as other genetic systems but more flexible and convenient for the researcher, although they usually require huge search spaces to justify their use, due to the computational resources the formal models involve. The first step in evaluating the viability of applying our approaches to this realm is to fully understand the domain and to select an instance of the problem (gas lift optimization) for which genetic approaches seem promising. After analyzing the state of the art of this topic, we chose a previous work from the literature that approaches the problem by means of numerical methods; this contribution includes enough detail to be reproduced and complete data to be carefully analyzed. We designed a classical, simple genetic algorithm to try to reproduce those results and to understand the problem in depth. We could easily incorporate the well data used by the authors and translate their mathematical well model, originally optimized numerically, into a proper fitness function.
We analyzed the 100 well curves used in their experiment and observed similar results; in addition, our system automatically inferred an optimum total amount of injected gas for the field compatible with the sum of the optimum gas injected in each well reported by the authors. We identified several constraints that could be worth incorporating into the optimization process but that may be difficult to express numerically. It could also be interesting to automatically propose other mathematical models to fit both individual well curves and the behaviour of the complete field. These facts and conclusions justify continuing to explore the viability of applying the more sophisticated approaches previously proposed by our research group.
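A classical, simple GA of the kind described can be sketched as follows. Everything here is a hypothetical stand-in for the study's setup: the concave well performance curves, the budget-normalization trick, and the GA parameters (population, elitism fraction, averaging crossover, Gaussian mutation) are illustrative choices, not the authors' fitness function.

```python
import random

def ga_allocate(curves, gas_total, pop=40, gens=120, seed=1):
    """Toy GA: split gas_total among wells to maximize total oil.
    curves[i] maps injected gas -> oil rate for well i."""
    rng = random.Random(seed)
    n = len(curves)

    def normalize(x):  # rescale so every candidate spends the budget exactly
        s = sum(x)
        return [gas_total * xi / s for xi in x] if s else x

    def fitness(x):
        return sum(c(xi) for c, xi in zip(curves, x))

    popn = [normalize([rng.random() for _ in range(n)]) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        elite = popn[: pop // 4]              # elitism: keep the best quarter
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]   # crossover
            k = rng.randrange(n)                              # mutation
            child[k] = max(0.0, child[k] + rng.gauss(0, 0.1 * gas_total))
            children.append(normalize(child))
        popn = elite + children
    return max(popn, key=fitness)
```

For two identical wells with oil = sqrt(gas) and 4 units of gas, the optimum is an even split yielding 2*sqrt(2) ≈ 2.83; the GA closes in on that value while always respecting the gas budget.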

Keywords: evolutionary automatic programming, gas lift, genetic algorithms, oil production

Procedia PDF Downloads 163
30969 Ecopath Analysis of Trophic Structure in Moroccan Mediterranean Sea and Atlantic Ocean

Authors: Salma Aboussalam, Karima Khalil, Khalid Elkalay

Abstract:

The Ecopath model was used to evaluate the trophic structure, function, and current status of the Moroccan Mediterranean Sea ecosystem. The model incorporated 31 functional groups, including fish species, invertebrates, primary producers, and detritus. The analysis of trophic interactions among these groups yielded an average trophic transfer efficiency of 23%. The findings revealed that the ecosystem produces more energy than it consumes, with high respiration and consumption rates. Indicators of stability and development were low, indicating a disturbed ecosystem with a linear trophic structure. Additionally, keystone species were identified using the keystone index and mixed trophic impact analysis; demersal invertebrates, zooplankton, and cephalopods were found to have a significant impact on other groups.

Keywords: ecopath, food web, trophic flux, Moroccan Mediterranean Sea

Procedia PDF Downloads 99
30968 Novel Stator Structure Switching Flux Permanent Magnet Motor

Authors: Mengjie Shen, Jianhua Wu, Chun Gan, Lifeng Zhang, Qingguo Sun

Abstract:

The switching flux permanent magnet (SFPM) motor has a doubly salient structure, which leads to high torque ripple, and, as a permanent magnet motor, it also exhibits cogging torque. Both torque ripple and cogging torque affect motor performance. A novel stator structure for the SFPM motor is presented in this paper: a triangular silicon steel sheet is placed in the stator slot to reduce the torque ripple without deteriorating the cogging torque. The proposed motor is simulated using the 2-D finite element method (FEM) in the Ansoft and Simplorer software packages, and the results show good performance of the proposed SFPM motor.

Keywords: switching flux permanent magnet (SFPM) motor, torque ripple, Ansoft, FEM

Procedia PDF Downloads 573
30967 Aerobic Bioprocess Control Using Artificial Intelligence Techniques

Authors: M. Caramihai, Irina Severin

Abstract:

This paper deals with the design of an intelligent control structure for a bioprocess of Hansenula polymorpha yeast cultivation. The objective of the process control is to produce biomass in a desired physiological state. The work demonstrates that the designed hybrid control techniques (HCT) are able to recognize specific bioprocess evolution trajectories using neural networks trained specifically for this purpose, in order to estimate the model parameters and to adjust the overall bioprocess evolution through an expert system and a fuzzy structure. The design of the control algorithm, as well as its tuning through realistic simulations, is presented. Taking into consideration the synergy of paradigms such as fuzzy logic, neural networks, and symbolic artificial intelligence (AI), we present a complete intelligent control architecture with application to bioprocess control.

Keywords: bioprocess, intelligent control, neural nets, fuzzy structure, hybrid techniques

Procedia PDF Downloads 427
30966 Effects of Soil-Structure Interaction on Seismic Performance of Steel Structures Equipped with Viscous Fluid Dampers

Authors: Faramarz Khoshnoudian, Saeed Vosoughiyan

Abstract:

The main goal of this article is to clarify the effects of soil-structure interaction (SSI) on the seismic performance of steel moment resisting frame (SMRF) buildings resting on soft soil and equipped with viscous fluid dampers (VFDs). For this purpose, detailed structural models of a ten-story SMRF with VFDs, excluding and including SSI, are constructed first. To simulate the dynamic response of the foundation, the simple cone model is applied. Nonlinear time-history analyses of the models are then conducted using three earthquake excitations of different intensities. The analysis results demonstrate that the SSI effects on the seismic performance of a structure equipped with VFDs on soft soil need to be considered: VFDs designed under the rigid-foundation hypothesis fail to achieve the expected seismic objective once SSI is taken into account.

Keywords: nonlinear time-history analysis, soil-structure interaction, steel moment resisting frame building, viscous fluid dampers

Procedia PDF Downloads 338
30965 Workflow Based Inspection of Geometrical Adaptability from 3D CAD Models Considering Production Requirements

Authors: Tobias Huwer, Thomas Bobek, Gunter Spöcker

Abstract:

Driving forces for enhancements in production are trends like digitalization and individualized production. Currently, such developments are restricted to assembly parts; complex freeform surfaces are not addressed in this context. The need for efficient use of resources and near-net-shape production will require individualized production of complex-shaped workpieces. Due to variations between the nominal model and the actual geometry, this can lead to changes of operations in computer-aided process planning (CAPP) to make CAPP manageable for adaptive serial production. In this context, 3D CAD data can be a key to realizing that objective. Along with developments in geometrical adaptation, a preceding inspection method based on CAD data is required to support the process planner with objective criteria for decisions about the adaptive manufacturability of workpieces. Nowadays, such decisions depend on the experience-based knowledge of humans (e.g. process planners) and are therefore subjective, leading to variability in workpiece quality and potential failures in production. In this paper, we present an automatic part inspection method, based on design and measurement data, which evaluates the actual geometries of single workpiece preforms. The aim is to automatically determine the suitability of the current shape for further machining and to provide a basis for an objective decision about subsequent adaptive manufacturability. The proposed method is realized by a workflow-based approach, keeping in mind the requirements of industrial applications. Workflows are a well-known method for designing standardized processes; especially in domains like the aerospace industry, standardization and certification of processes are important. Function blocks, providing a standardized, event-driven abstraction of algorithms and data exchange, are used for modeling and executing inspection workflows.
Each analysis step of the inspection, such as positioning of measurement data or checking of geometrical criteria, is carried out by function blocks. One advantage of this approach is its flexibility in designing workflows and adapting algorithms to the application domain. In general, it is checked whether a geometrical adaption is possible within the specified tolerance range. The development of particular function blocks is predicated on workpiece-specific information, e.g. design data. Furthermore, appropriate logics and decision criteria have to be considered for the different product lifecycle phases: for example, tolerances for geometric deviations differ in type and size between new-part production and repair processes. In addition to function blocks, appropriate referencing systems are important; they need to support exact determination of the position and orientation of the actual geometries to provide a basis for precise analysis. The presented approach provides an inspection methodology for adaptive, part-individual process chains. The analysis of each workpiece results in an inspection protocol and an objective decision about further manufacturability. A representative application domain is the product lifecycle of turbine blades, comprising new-part production and a maintenance process; in both cases, a geometrical adaptation is required to calculate individual production data. In contrast to existing approaches, the proposed initial inspection method provides information to decide between different potential adaptive machining processes.
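The function-block idea, each analysis step packaged as a standardized unit and chained into a workflow that emits a protocol and a verdict, can be sketched as below. The step names ("align", "thickness") and the dictionary part representation are invented for illustration and do not reflect the authors' IEC-style function-block implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

Step = Callable[[Dict[str, float]], Tuple[bool, str]]

@dataclass
class InspectionWorkflow:
    """Chain of 'function blocks': each step takes the part state and
    returns (ok, message); a run yields a protocol and an overall verdict."""
    steps: List[Tuple[str, Step]] = field(default_factory=list)

    def add(self, name: str, fn: Step) -> "InspectionWorkflow":
        self.steps.append((name, fn))
        return self

    def run(self, part: Dict[str, float]):
        protocol = []
        ok_all = True
        for name, fn in self.steps:
            ok, msg = fn(part)
            protocol.append((name, ok, msg))
            ok_all = ok_all and ok
            if not ok:          # later blocks depend on earlier results
                break
        return ok_all, protocol
```

A part whose measured thickness lies within tolerance passes the whole chain; an out-of-tolerance part fails at the checking block and the protocol records where and why.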

Keywords: adaptive, CAx, function blocks, turbomachinery

Procedia PDF Downloads 300
30964 Kinematic Optimization of Energy Extraction Performances for Flapping Airfoil by Using Radial Basis Function Method and Genetic Algorithm

Authors: M. Maatar, M. Mekadem, M. Medale, B. Hadjed, B. Imine

Abstract:

In this paper, numerical simulations have been carried out to study the performance of a flapping wing used as an energy collector. Metamodeling and genetic algorithms are used to find the optimal configuration, improving the power coefficient and/or the efficiency. Radial basis functions (RBF) and genetic algorithms have been applied to solve this problem. Three optimization factors are controlled, namely the dimensionless heave amplitude h₀, the pitch amplitude θ₀ and the flapping frequency f. ANSYS FLUENT software has been used to solve the governing equations at a Reynolds number of 1100, while the heave and pitch motion of a NACA0015 airfoil has been prescribed through a user-defined function (UDF). The results reveal an average power coefficient of 0.78 and an efficiency of 0.338 with an inexpensive low-fidelity model, at a total relative error of 4.1% versus the simulation. The RBF-NSGA-II optimum improves on the validated model by 1.2%.
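The surrogate step above can be sketched in a few lines: fit an RBF interpolant to a handful of expensive evaluations, then search the cheap surrogate for an optimum. The 1-D toy objective below stands in for a CFD evaluation of the power coefficient; the kernel width and sample counts are assumptions for the demonstration.

```python
import numpy as np

def rbf_fit(x, y, eps=2.0):
    # Gaussian RBF interpolation: solve Phi w = y for the weights w.
    phi = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)
    return np.linalg.solve(phi, y)

def rbf_eval(x_train, w, x_query, eps=2.0):
    phi = np.exp(-(eps * (x_query[:, None] - x_train[None, :])) ** 2)
    return phi @ w

objective = lambda h: h * np.sin(3.0 * h)   # stand-in for an expensive simulation
x_train = np.linspace(0.0, 1.5, 8)          # sampled heave amplitudes
w = rbf_fit(x_train, objective(x_train))

x_dense = np.linspace(0.0, 1.5, 301)        # cheap sweep over the surrogate
best = x_dense[np.argmax(rbf_eval(x_train, w, x_dense))]
print(f"surrogate optimum near h0 = {best:.2f}")
```

In the paper the surrogate is searched by NSGA-II over three factors rather than a grid sweep over one, but the cost structure is the same: eight expensive evaluations buy an arbitrarily dense exploration of the cheap model.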

Keywords: numerical simulation, flapping wing, energy extraction, power coefficient, efficiency, RBF, NSGA-II

Procedia PDF Downloads 52
30963 Synthesis of LiMₓMn₂₋ₓO₄ Doped Co, Ni, Cr and Its Characterization as Lithium Battery Cathode

Authors: Dyah Purwaningsih, Roto Roto, Hari Sutrisno

Abstract:

Manganese dioxide (MnO₂) and its derivatives are among the most widely used materials for the positive electrode in both primary and rechargeable lithium batteries. The MnO₂-derived compound LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) is one of the leading candidates for positive electrode materials in lithium batteries, as it is abundant, low cost and environmentally friendly. Over the years, synthesis of LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) has been carried out using various methods, including sol-gel, gas condensation, spray pyrolysis, and ceramics. Problems with these methods persist, including high cost (so commercially inapplicable) and the need for high temperatures (environmentally unfriendly). This research aims to: (1) synthesize LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) by a reflux technique; (2) develop a microstructure analysis method for LiMₓMn₂₋ₓO₄ XRD powder data with the two-stage method; (3) study the electrical conductivity of LiMₓMn₂₋ₓO₄. The starting materials, Mn(CH₃COO)₂·4H₂O and Na₂S₂O₈, were refluxed for 10 hours at 120°C to form β-MnO₂. The doping with Co, Ni and Cr was carried out by a solid-state method with LiOH to form LiMₓMn₂₋ₓO₄. The instruments used included XRD, SEM-EDX, XPS, TEM, SAA, TG/DTA, FTIR, an LCR meter and an eight-channel battery analyzer. Microstructure analysis of LiMₓMn₂₋ₓO₄ was carried out on XRD powder data by the two-stage method using the FullProf program integrated into WinPlotR and the Oscail program, as well as on binding energy data from XPS. The morphology of LiMₓMn₂₋ₓO₄ was studied with SEM-EDX, TEM, and SAA. The thermal stability was tested with TG/DTA, and the electrical conductivity was studied from the LCR meter data. The specific capacity of LiMₓMn₂₋ₓO₄ as a lithium battery cathode was tested using an eight-channel battery analyzer. The results showed that the synthesis of LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) was successfully carried out by reflux.
The optimal calcination temperature is 750°C. XRD characterization shows that LiMn₂O₄ has a cubic crystal structure with the Fd3m space group. Using CheckCell in WinPlotR, increasing the Li/Mn mole ratio does not change the LiMn₂O₄ crystal structure. Doping with Co, Ni and Cr in LiMₓMn₂₋ₓO₄ (x = 0.02; 0.04; 0.06; 0.08; 0.10) likewise does not change the cubic Fd3m structure. All the formed crystals are polycrystals with sizes of 100-450 nm. Characterization of the LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) microstructure by the two-stage method shows shrinkage of the lattice parameter and cell volume. Based on the measured capacitance range, the conductivity of LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) is ionic, with varying capacitance. The specific battery capacities at a voltage of 4799.7 mV for LiMn₂O₄; Li₁.₀₈Mn₁.₉₂O₄; LiCo₀.₁Mn₁.₉O₄; LiNi₀.₁Mn₁.₉O₄ and LiCr₀.₁Mn₁.₉O₄ are 88.62 mAh/g; 2.73 mAh/g; 89.39 mAh/g; 85.15 mAh/g; and 1.48 mAh/g, respectively.

Keywords: LiMₓMn₂₋ₓO₄, solid-state, reflux, two-stage method, ionic conductivity, specific capacity

Procedia PDF Downloads 197
30962 Copyright Clearance for Artificial Intelligence Training Data: Challenges and Solutions

Authors: Erva Akin

Abstract:

The use of copyrighted material for machine learning purposes is a challenging issue in the field of artificial intelligence (AI). While machine learning algorithms require large amounts of data to train and to improve their accuracy and creativity, using copyrighted material without permission from the authors may infringe their intellectual property rights. To overcome the legal hurdle that copyright poses to the sharing, access and re-use of data, the use of copyrighted material for machine learning purposes may be considered permissible under certain circumstances. For example, if the copyright holder has given permission to use the data through a licensing agreement, then the use for machine learning purposes may be lawful. It is also argued that copying for non-expressive purposes that do not involve conveying expressive elements to the public, such as automated data extraction, should not be seen as infringing. The focus of such ‘copy-reliant technologies’ is on understanding language rules, styles, and syntax; no creative ideas are being used. However, the non-expressive use defense sits within the framework of the fair use doctrine, which allows the use of copyrighted material for research or educational purposes. Questions arise because the fair use doctrine is not available in EU law; instead, the InfoSoc Directive provides a rigid system of exclusive rights with a list of exceptions and limitations. One could only argue that non-expressive uses of copyrighted material for machine learning purposes do not constitute a ‘reproduction’ in the first place. Nevertheless, the use of machine learning with copyrighted material remains difficult because EU copyright law applies to the mere use of the works. Two solutions can be proposed to address the problem of copyright clearance for AI training data.
The first is to introduce a broad exception for text and data mining, either mandatory or limited to commercial and scientific purposes. The second is for copyright laws to permit the reproduction of works for non-expressive purposes, which opens the door to discussions regarding the transposition of the fair use principle from US into EU law. Both solutions aim to provide more space for AI developers to operate and to encourage greater freedom, which could lead to more rapid innovation in the field. The Data Governance Act presents a significant opportunity to advance these debates. Finally, issues concerning the balance between general public interests and legitimate private interests in machine learning training data must be addressed. In my opinion, it is crucial that machine-created output fall into the public domain. Machines depend on human creativity, innovation, and expression. To encourage technological advancement and innovation, freedom of expression and business operation must be prioritised.

Keywords: artificial intelligence, copyright, data governance, machine learning

Procedia PDF Downloads 87
30961 Social Structure of Corporate Social Responsibility Programme in Pantai Harapan Jaya Village, Bekasi Regency, West Java

Authors: Auliya Adzilatin Uzhma, Ismu Rini Dwi, I. Nyoman Suluh Wijaya

Abstract:

The Corporate Social Responsibility (CSR) programme in Pantai Harapan Jaya village comprises mangrove cultivation and the distribution of fishery capital; to achieve its goals, the programme requires participation from the community. Moeliono, in Fahrudin (2011), notes that participation is driven both by intrinsic reasons from within individuals themselves and by extrinsic reasons from others related to them. The fundamental pattern of connections that bounds the actions an organization can take is called the social structure. The purpose of this research is to identify the form of public participation and the social-structure typology of the villagers and of the people participating in the CSR programme, and to identify the key actors of the community and of the participants. This research uses the Social Network Analysis method, measuring the rate of participation, density and centrality. The results show that the people involved in the programme live in Dusun Pondok Dua and work in the fisheries sector. The density value among participants is 0.516, meaning that 51.6% of the participants are involved in the same steps of the CSR programme.
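The density figure reported above is a standard social-network measure: for an undirected network, density is the share of possible ties actually present. A minimal sketch, with a toy edge list rather than the study's data:

```python
def density(n_nodes, edges):
    # Undirected density = 2m / (n(n-1)), with m the number of distinct ties.
    m = len({frozenset(e) for e in edges})
    return 2 * m / (n_nodes * (n_nodes - 1))

edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]
print(density(5, edges))  # 4 ties out of 10 possible pairs -> 0.4
```

A density of 0.516, as in the study, means just over half of all possible participant pairs are connected through shared programme activities.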

Keywords: social structure, social network analysis, corporate social responsibility, public participation

Procedia PDF Downloads 484
30960 Constructing a Semi-Supervised Model for Network Intrusion Detection

Authors: Tigabu Dagne Akal

Abstract:

While advances in computer and communications technology have made the network ubiquitous, they have also rendered networked systems vulnerable to malicious attacks devised from a distance. These attacks or intrusions start with attackers infiltrating a network through a vulnerable host and then launching further attacks on the local network or intranet. Nowadays, system administrators and network professionals can attempt to prevent such attacks by developing intrusion detection tools and systems using data mining technology. In this study, the experiments were conducted following the Knowledge Discovery in Databases process model, which starts with the selection of datasets. The dataset used in this study was taken from the Massachusetts Institute of Technology Lincoln Laboratory. The data were then pre-processed: the major activities included filling in missing values, removing outliers, resolving inconsistencies, integrating data containing both labelled and unlabelled records, dimensionality reduction, size reduction, and data transformation such as discretization. A total of 21,533 intrusion records are used for training the models, and a separate set of 3,397 records is used as a testing set to validate the performance of the selected model. For building a predictive model for intrusion detection, the J48 decision tree and Naïve Bayes algorithms were tested as classification approaches, both with and without feature selection. The model created with the J48 decision tree algorithm under 10-fold cross validation with the default parameter values showed the best classification accuracy: a prediction accuracy of 96.11% on the training datasets and 93.2% on the test dataset for classifying new instances into the normal, DOS, U2R, R2L and probe classes.
The findings of this study show that data mining methods generate interesting rules that are crucial for intrusion detection and prevention in the networking industry. Future research directions are suggested toward developing an applicable system in this area.
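The Naïve Bayes baseline tested above can be sketched in a few lines of Python. The features (protocol, TCP flag), records and class labels below are invented for illustration, not taken from the Lincoln Laboratory data.

```python
import math
from collections import Counter, defaultdict

def train_nb(records, labels):
    # Class priors, per-feature conditional counts, and per-feature vocabularies.
    priors = Counter(labels)
    conds = defaultdict(Counter)   # (feature_index, class) -> value counts
    vocab = defaultdict(set)       # feature_index -> observed values
    for rec, y in zip(records, labels):
        for i, v in enumerate(rec):
            conds[(i, y)][v] += 1
            vocab[i].add(v)
    return priors, conds, vocab

def predict_nb(model, rec):
    priors, conds, vocab = model
    n = sum(priors.values())
    best_cls, best_score = None, float("-inf")
    for cls, count in priors.items():
        score = math.log(count / n)
        for i, v in enumerate(rec):
            # Laplace-smoothed conditional probability P(value | class).
            score += math.log((conds[(i, cls)][v] + 1) / (count + len(vocab[i])))
        if score > best_score:
            best_cls, best_score = cls, score
    return best_cls

X = [("tcp", "SF"), ("tcp", "S0"), ("udp", "SF"), ("tcp", "S0"), ("icmp", "S0")]
y = ["normal", "dos", "normal", "dos", "probe"]
model = train_nb(X, y)
print(predict_nb(model, ("tcp", "S0")))  # -> dos
```

The J48 decision tree that won in the study works differently (recursive information-gain splits), but consumes the same discretized feature records that the pre-processing step produces.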

Keywords: intrusion detection, data mining, computer science

Procedia PDF Downloads 300
30959 Machine Learning-Driven Prediction of Cardiovascular Diseases: A Supervised Approach

Authors: Thota Sai Prakash, B. Yaswanth, Jhade Bhuvaneswar, Marreddy Divakar Reddy, Shyam Ji Gupta

Abstract:

Across the globe there are many chronic diseases, and heart disease stands out as one of the most perilous. Sadly, many lives are lost to this condition even though early intervention could prevent such tragedies; however, identifying heart disease in its initial stages is not easy. To address this challenge, we propose an automated system aimed at predicting the presence of heart disease using advanced techniques, empowering individuals with the knowledge needed to take proactive measures against this potentially fatal illness. Our approach involves meticulous data preprocessing and the development of predictive models using classification algorithms such as Support Vector Machines (SVM), Decision Tree, and Random Forest. We assess the efficiency of each model with metrics such as accuracy, ensuring that we select the most reliable option, and conduct thorough data analysis to reveal the importance of the different attributes. Among the models considered, Random Forest emerges as the standout performer, with an accuracy of 96.04% in our study.
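The bagging principle behind the winning Random Forest model can be illustrated with a minimal pure-Python sketch: many weak trees (here one-level decision stumps) are trained on bootstrap samples and combined by majority vote. The toy dataset of (age, cholesterol) pairs and disease labels is an assumption, not the study's data.

```python
import random

def best_stump(X, y):
    # Exhaustively pick the (feature, threshold, polarity) with fewest errors.
    best_err, best_rule = len(y) + 1, None
    for f in range(len(X[0])):
        for t in {row[f] for row in X}:
            for pol in (0, 1):
                pred = [pol if row[f] >= t else 1 - pol for row in X]
                err = sum(p != yy for p, yy in zip(pred, y))
                if err < best_err:
                    best_err, best_rule = err, (f, t, pol)
    f, t, pol = best_rule
    return lambda row: pol if row[f] >= t else 1 - pol

def bagged_forest(X, y, n_trees=25, seed=0):
    rng = random.Random(seed)
    trees = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]  # bootstrap sample
        trees.append(best_stump([X[i] for i in idx], [y[i] for i in idx]))
    return lambda row: int(sum(t(row) for t in trees) > n_trees / 2)

X = [(45, 180), (61, 240), (50, 300), (38, 160), (66, 280), (42, 200)]
y = [0, 1, 1, 0, 1, 0]  # 1 = disease present
forest = bagged_forest(X, y)
print(forest((66, 310)), forest((35, 150)))  # high-risk vs low-risk toy profiles
```

A real random forest additionally grows each tree to depth and subsamples features at every split, but the variance-reduction mechanism, averaging many decorrelated weak learners, is the one shown here.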

Keywords: support vector machines, decision tree, random forest

Procedia PDF Downloads 46
30958 Government Big Data Ecosystem: A Systematic Literature Review

Authors: Syed Iftikhar Hussain Shah, Vasilis Peristeras, Ioannis Magnisalis

Abstract:

Data that is high in volume, velocity and veracity and that comes from a variety of sources is generated in all sectors, including government. Globally, public administrations are pursuing (big) data as a new technology and trying to adopt data-centric architectures for hosting and sharing data. Properly executed, big data and data analytics in the government (big) data ecosystem can lead to data-driven government and have a direct impact on the way policymakers work and citizens interact with governments. In this research paper, we conduct a systematic literature review. The main aims of this paper are to highlight essential aspects of the government (big) data ecosystem and to explore the most critical socio-technical factors that contribute to its successful implementation. The essential aspects of the government (big) data ecosystem include its definition, data types, data lifecycle models, and actors and their roles. We also discuss the potential impact of (big) data in public administration and gaps in the government data ecosystem literature. As this is a new topic, we did not find articles specifically on the government (big) data ecosystem and therefore focused our research on relevant adjacent areas such as humanitarian data, open government data, scientific research data, and industry data.

Keywords: applications of big data, big data, big data types, big data ecosystem, critical success factors, data-driven government, e-government, gaps in data ecosystems, government (big) data, literature review, public administration, systematic review

Procedia PDF Downloads 236
30957 Cirrhosis Mortality Prediction as Classification Using Frequent Subgraph Mining

Authors: Abdolghani Ebrahimi, Diego Klabjan, Chenxi Ge, Daniela Ladner, Parker Stride

Abstract:

In this work, we use machine learning and novel data analysis techniques to predict the one-year mortality of cirrhotic patients. Data from 2,322 patients with liver cirrhosis were collected at a single medical center. Different machine learning models are applied to predict one-year mortality over a comprehensive feature space including demographic information, comorbidities, clinical procedures and laboratory tests. A temporal pattern mining technique called Frequent Subgraph Mining (FSM) is used. The Model for End-Stage Liver Disease (MELD) prediction of mortality serves as a comparator. All of our models statistically significantly outperform the MELD-score model, showing an average 10% improvement in the area under the curve (AUC). FSM by itself does not improve the model significantly, but together with an ensemble technique it further improves model performance. With the abundance of data available in healthcare through electronic health records (EHR), existing predictive models can be refined to identify and treat patients at risk of higher mortality. However, due to the sparsity of the temporal information needed by FSM, the FSM model alone does not yield significant improvements. To the best of our knowledge, this is the first work to apply modern machine learning algorithms and data analysis methods to predicting the one-year mortality of cirrhotic patients and to build a model that predicts one-year mortality significantly more accurately than the MELD score. We have also tested the potential of FSM and provided a new perspective on the importance of clinical features.
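The AUC comparisons above can be made concrete with the rank-based (Mann-Whitney) formulation of the AUC: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. The risk scores and labels below are toy values, not the study's data.

```python
def auc(scores, labels):
    # Probability a random positive is ranked above a random negative
    # (ties count as half a win).
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

risk_scores = [0.2, 0.4, 0.35, 0.8, 0.7, 0.3]   # model-predicted mortality risk
labels      = [0,   0,   1,    1,   1,   0]     # 1 = died within one year
print(auc(risk_scores, labels))
```

A 10% AUC improvement over the MELD baseline, as reported above, therefore means the models rank deceased patients above survivors noticeably more often.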

Keywords: machine learning, liver cirrhosis, subgraph mining, supervised learning

Procedia PDF Downloads 136
30956 Frequency Selective Filters for Estimating the Equivalent Circuit Parameters of Li-Ion Battery

Authors: Arpita Mondal, Aurobinda Routray, Sreeraj Puravankara, Rajashree Biswas

Abstract:

The most difficult part of designing a battery management system (BMS) is battery modeling. A good battery model captures the dynamics of the cell, which helps in energy management through accurate model-based state estimation algorithms. So far, the most suitable and fruitful model is the equivalent circuit model (ECM). However, in real-time applications the model parameters are time-varying, changing with current, temperature, state of charge (SOC) and aging of the battery, and this strongly affects the performance of the model. Therefore, to increase the performance of the equivalent circuit model, the parameter estimation has been carried out in the frequency domain. A battery is a very complex system, associated with various chemical reactions and heat generation, so it is difficult to select the optimal model structure. Increasing the model order improves model accuracy, but a higher-order model tends toward over-parameterization and unfavorable prediction capability, while the model complexity increases enormously. In the time domain, it becomes difficult to solve higher-order differential equations as the model order increases. This problem can be resolved by frequency-domain analysis, where the computational problems due to ill-conditioning are reduced. In the frequency domain, several dominating frequencies can be found in the input as well as the output data. Selective frequency-domain estimation has been carried out: first the frequencies of the input and output are estimated by subspace decomposition, then specific bands are chosen from the most dominating to the least while carrying out least-squares, recursive least-squares and Kalman filter based parameter estimation. In this paper, a second-order battery model consisting of three resistors, two capacitors, and one SOC-controlled voltage source has been chosen.
For model identification and validation, hybrid pulse power characterization (HPPC) tests have been carried out on a 2.6 Ah LiFePO₄ battery.
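The parameter-estimation step can be illustrated with a toy sketch: a discretized one-RC-branch ECM reduces to the ARX relation v[k] = a·v[k-1] + b0·i[k] + b1·i[k-1], and the coefficients follow from current/voltage records by ordinary least squares (the paper also uses recursive least squares and Kalman filtering). The "true" coefficients below are assumptions for the demonstration, not the paper's cell.

```python
import numpy as np

rng = np.random.default_rng(1)
a_true, b0_true, b1_true = 0.95, 0.05, -0.04

i = rng.standard_normal(200)              # excitation current profile
v = np.zeros(200)
for k in range(1, 200):
    # Simulated voltage response of the discretized one-RC-branch ECM.
    v[k] = a_true * v[k - 1] + b0_true * i[k] + b1_true * i[k - 1]

# Regression matrix: rows [v[k-1], i[k], i[k-1]].
phi = np.column_stack([v[:-1], i[1:], i[:-1]])
theta, *_ = np.linalg.lstsq(phi, v[1:], rcond=None)
print(np.round(theta, 3))  # recovers [a, b0, b1]
```

With noisy measurements this plain least-squares fit degrades, which is exactly the regime where the paper's band-selective frequency-domain estimation and Kalman filtering pay off.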

Keywords: equivalent circuit model, frequency estimation, parameter estimation, subspace decomposition

Procedia PDF Downloads 155
30955 Overhead Reduction by Channel Estimation Using Linear Interpolation for Single Carrier Frequency Domain Equalization Transmission

Authors: Min-Su Song, Haeng-Bok Kil, Eui-Rim Jeong

Abstract:

This paper proposes a new method to reduce the pilot overhead for single carrier frequency domain equalization (SC-FDE) transmission. In the conventional SC-FDE transmission structure, the pilot overhead is heavy because pilots are transmitted in every SC-FDE block. The proposed SC-FDE structure has fewer pilots, with many SC-FDE blocks transmitted between them. Channel estimation and equalization are performed at the pilot positions, and the channels between pilots are estimated through linear interpolation. This reduces the pilot overhead compared with the conventional structure while still enabling reliable channel estimation and equalization.
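The interpolation step above can be sketched in a few lines: channel estimates exist only at the pilot blocks, and intermediate blocks reuse linearly interpolated values. The block indices and gains are illustrative; real-valued gains are used for simplicity, whereas a complex channel would be interpolated per component.

```python
import numpy as np

pilot_blocks = np.array([0, 8, 16])          # block indices carrying pilots
h_at_pilots = np.array([1.00, 0.60, 0.90])   # channel gain estimated there

all_blocks = np.arange(17)
h_interp = np.interp(all_blocks, pilot_blocks, h_at_pilots)
print(h_interp[4])  # halfway between pilot estimates 1.00 and 0.60 -> 0.8
```

In this toy layout only 3 of 17 blocks carry pilots, versus 17 of 17 in the conventional structure, which is the overhead saving the abstract describes; the trade-off is that interpolation assumes the channel varies slowly between pilot blocks.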

Keywords: channel estimation, linear interpolation, pilot overhead, SC-FDE

Procedia PDF Downloads 278
30954 Semantic-Based Collaborative Filtering to Improve Visitor Cold Start in Recommender Systems

Authors: Baba Mbaye

Abstract:

In collaborative filtering recommendation systems, a user receives suggested items based on the opinions and evaluations of a community of users. This type of recommendation system uses only the information (ratings as numerical values) contained in a usage matrix as input data. This matrix can be constructed from users' behaviors or by asking users to declare their opinions on the items they know. The cold start problem leads to very poor performance for new users: it occurs at the beginning of use, when the system lacks the data needed to make recommendations. There are three types of cold start problems: cold start for a new item, for a new system, and for a new user. This article addresses the cold start for a new user. When the system welcomes a new user, the profile exists but does not have enough data, and its affinities with other user profiles are still unknown, which leads to recommendations not adapted to the new user's profile. In this paper, we propose an approach that improves cold start by using notions of similarity and semantic proximity between user profiles. We use the available cold metadata (metadata extracted from the new user's data) to position the new user within a community, looking for similarities and semantic proximities with the existing user profiles of the system. Proximity relates close concepts considered to belong to the same group, while similarity groups together elements that appear alike; the two notions are related but distinct. We construct the similarity measure based on: a) concepts (properties, terms, instances) independent of the ontology structure and b) the simultaneous representation of the two concepts (relations, presence of terms in a document, simultaneous presence of the authorities).
We propose an ontology, OIVCSRS (Ontology of Improvement Visitor Cold Start in Recommender Systems), to structure the terms and concepts representing the meaning of an information field, whether through the metadata of a namespace or the elements of a knowledge domain. This approach allows us to automatically attach the new user to a user community, partially compensate for the data not initially provided, and ultimately associate a better first profile with the cold start. Thus, the aim of this paper is to propose an approach to improving cold start using semantic technologies.
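The profile-matching step underlying this approach can be sketched minimally: a new visitor's sparse cold-metadata vector is compared against existing community profiles by cosine similarity, and the closest community supplies the first recommendations. The term dimensions, weights and community names below are toy assumptions, not the paper's ontology.

```python
import math

def cosine(u, v):
    # Cosine similarity between two term-weight vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

profiles = {                        # term weights over (sports, music, tech)
    "community_sports": (0.9, 0.1, 0.2),
    "community_tech":   (0.1, 0.2, 0.9),
}
new_visitor = (0.2, 0.1, 0.8)       # cold metadata extracted at first visit
best = max(profiles, key=lambda c: cosine(new_visitor, profiles[c]))
print(best)  # -> community_tech
```

The semantic-proximity side of the paper's method goes further than this purely lexical match: ontology relations can connect a visitor to a community even when no terms overlap at all.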

Keywords: visitor cold start, recommender systems, collaborative filtering, semantic filtering

Procedia PDF Downloads 221
30953 System Identification in Presence of Outliers

Authors: Chao Yu, Qing-Guo Wang, Dan Zhang

Abstract:

The outlier detection problem for dynamic systems is formulated as a matrix decomposition problem with low-rank and sparse matrices and further recast as a semidefinite programming (SDP) problem. A fast algorithm is presented that solves the resulting problem while preserving the structure of the solution matrix, greatly reducing the computational cost relative to the standard interior-point method. The computational burden is further reduced by proper construction of subsets of the raw data without violating the low-rank property of the involved matrix. The proposed method achieves exact detection of outliers when there is no or little noise in the output observations. In the case of significant noise, a novel approach based on under-sampling with averaging is developed to denoise while retaining the saliency of the outliers; the filtered data then enable successful outlier detection with the proposed method where existing filtering methods fail. Using the recovered “clean” data from the proposed method gives much better parameter estimation than using the raw data.
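The under-sampling-with-averaging idea can be illustrated with a toy sketch: averaging short windows suppresses zero-mean noise by roughly 1/sqrt(n) per window, while a gross outlier keeps a large footprint in its window's average. The noise level, window size and outlier magnitude below are assumptions for the demonstration, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)
y = 0.1 * rng.standard_normal(400)   # noisy output record of a system at rest
y[203] += 5.0                        # one gross outlier in the observations

# Under-sample by averaging non-overlapping blocks of 8 samples: the noise
# standard deviation shrinks by sqrt(8), the outlier contributes 5/8 = 0.625.
windows = y.reshape(-1, 8).mean(axis=1)
k = int(np.argmax(np.abs(windows)))
print("outlier lies in window", k)   # window 25 covers samples 200..207
```

This is only the denoising intuition; in the paper the so-filtered data then feed the low-rank-plus-sparse SDP decomposition, which localizes the outliers exactly rather than to a window.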

Keywords: outlier detection, system identification, matrix decomposition, low-rank matrix, sparsity, semidefinite programming, interior-point methods, denoising

Procedia PDF Downloads 309