Search results for: applications of big data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 29416

27376 Classification of Echo Signals Based on Deep Learning

Authors: Aisulu Tileukulova, Zhexebay Dauren

Abstract:

Radar plays an important role because it is widely used in both civil and military fields, and target detection is one of its most important applications. Detection accuracy for inconspicuous aerial objects in radar facilities is reduced against a background of noise. Convolutional neural networks can be used to improve the recognition of this type of aerial object. The purpose of this work is to develop an algorithm for recognizing aerial objects using a convolutional neural network (CNN) and to train that network. The proposed CNN consists of 8 convolutional layers followed by a 3-layer fully connected perceptron; ReLU is used as the activation function in the convolutional layers, while the last layer uses softmax. A data set was formed to train the network for target detection. We built a confusion matrix of the CNN model to measure its effectiveness. The results showed a test accuracy of 95.7%. Classification of echo signals using a CNN thus achieves high accuracy and significantly speeds up target prediction.
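The evaluation step described in the abstract (softmax output, confusion matrix, test accuracy) can be sketched in NumPy as follows; this is an illustrative reconstruction, not the authors' code, and the toy logits and labels are invented:

```python
import numpy as np

def softmax(z):
    # Last-layer activation named in the abstract
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def confusion_matrix(y_true, y_pred, n_classes):
    m = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m

# Toy network outputs (logits) for five echo signals, two classes
logits = np.array([[2.0, 0.1], [0.2, 1.5], [0.1, 2.2], [0.3, 1.8], [0.4, 2.5]])
y_true = np.array([0, 0, 1, 1, 1])
y_pred = softmax(logits).argmax(axis=1)

cm = confusion_matrix(y_true, y_pred, n_classes=2)
accuracy = np.trace(cm) / cm.sum()
```

The diagonal of `cm` counts correct classifications, so overall accuracy is the trace divided by the number of samples.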

Keywords: radar, neural network, convolutional neural network, echo signals

Procedia PDF Downloads 343
27375 Finding Data Envelopment Analysis Targets Using Multi-Objective Programming in DEA-R with Stochastic Data

Authors: R. Shamsi, F. Sharifi

Abstract:

In this paper, we obtain the projection of inefficient units in data envelopment analysis (DEA) in the case of stochastic inputs and outputs using a multi-objective programming (MOP) structure. In some problems, the inputs may be stochastic while the outputs are deterministic, or vice versa. For such cases we propose a multi-objective DEA-R model, because in some situations (e.g., when unnecessary and irrational weights in the BCC model reduce the efficiency score) an efficient decision-making unit (DMU) is reported as inefficient by the BCC model even though it is considered efficient by the DEA-R model. In other cases, only the ratio of stochastic data may be available (e.g., the ratio of stochastic inputs to stochastic outputs). We therefore provide a multi-objective DEA model without explicit outputs and prove that the input-oriented MOP DEA-R model under constant returns to scale can be replaced by the MOP-DEA model without explicit outputs under variable returns to scale, and vice versa. Solving the proposed model with interactive methods yields a projection that reflects the viewpoints of the decision-maker and the analyst, and is therefore closer to reality and more practical. Finally, an application is provided.

Keywords: DEA-R, multi-objective programming, stochastic data, data envelopment analysis

Procedia PDF Downloads 101
27374 Studies on Lucrative Process Layout for Medium Scale Industries

Authors: Balamurugan Baladhandapani, Ganesh Renganathan, V. R. Sanal Kumar

Abstract:

In this paper, a comprehensive review of various factory layouts has been carried out with the aim of designing a lucrative process layout for medium-scale industries. Industry databases reveal that end-product rejection rates are on the order of 10%, amounting to a large profit loss. To avoid these rejections and to increase the production of quality products, an intermediate non-destructive testing facility (INDTF) is recommended for increasing overall profit. We observed through detailed case studies that, by introducing an INDTF into medium-scale industries, expensive downstream processing of defective products can be avoided well before they reach their final shape. Additionally, defective products identified at the intermediate stage can be effectively used for other applications or recycled, thereby reducing the overall wastage of raw materials and increasing profit. We conclude that the prudent design of a factory layout using the critical path method, together with an INDTF, will warrant a profitable outcome.
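The critical path calculation invoked above can be sketched as a small forward pass over a task graph; the task names and durations below are invented for illustration, with the INDTF inserted as an inspection step:

```python
def earliest_finish(tasks):
    """tasks: {name: (duration, [predecessor names])}.
    Returns each task's earliest finish time; the project duration
    (the length of the critical path) is the maximum value."""
    memo = {}
    def ef(name):
        if name not in memo:
            duration, preds = tasks[name]
            memo[name] = duration + max((ef(p) for p in preds), default=0)
        return memo[name]
    return {name: ef(name) for name in tasks}

# Hypothetical process layout with an intermediate inspection (INDTF)
tasks = {
    "cast":    (4, []),
    "inspect": (1, ["cast"]),       # INDTF inserted before machining
    "machine": (3, ["inspect"]),
    "finish":  (2, ["machine"]),
}
finish_times = earliest_finish(tasks)
```

Here the inspection adds one time unit to the critical path but, per the abstract's argument, removes defective parts before the expensive machining and finishing steps.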

Keywords: intermediate non-destructive testing, medium scale industries, process layout design

Procedia PDF Downloads 497
27373 Integrated Model for Enhancing Data Security Processing Time in Cloud Computing

Authors: Amani A. Saad, Ahmed A. El-Farag, El-Sayed A. Helali

Abstract:

Cloud computing is an important and promising field of the recent decade. It allows resources, services and information to be shared among people all over the world. Although the advantages of using clouds are great, there are many risks, and data security is the most important and critical problem in cloud computing. In this research, a new security model for cloud computing is proposed to ensure secure communication, hide information from other users and save the user's time. In the proposed model, the Blowfish encryption algorithm is used for exchanging information or data, and a SHA-2 cryptographic hash algorithm is used for data integrity. For user authentication, a simple username and password scheme is used, with the password protected by a one-way SHA-2 hash. The proposed system shows an improvement in the processing time of uploading and downloading files to and from the cloud in secure form.
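The SHA-2 integrity and password-hashing steps can be sketched with Python's standard `hashlib` (the Blowfish exchange step is omitted, since it requires an external cryptography library); the salt and credentials below are illustrative only:

```python
import hashlib

def hash_password(password: str, salt: bytes) -> str:
    # One-way SHA-2 (SHA-256) digest of the salted password
    return hashlib.sha256(salt + password.encode("utf-8")).hexdigest()

def verify_password(password: str, salt: bytes, stored_digest: str) -> bool:
    # Authentication never recovers the password, only re-hashes it
    return hash_password(password, salt) == stored_digest

def file_digest(data: bytes) -> str:
    # SHA-2 digest used to check data integrity after upload/download
    return hashlib.sha256(data).hexdigest()

salt = b"demo-salt"                      # use os.urandom(16) in practice
stored = hash_password("s3cret", salt)
payload = b"file contents"
```

Comparing `file_digest` before upload and after download detects any tampering or corruption in transit.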

Keywords: cloud computing, data security, SaaS, PaaS, IaaS, Blowfish

Procedia PDF Downloads 351
27372 Comparison of Statistical Methods for Estimating Missing Precipitation Data in the River Subbasin Lenguazaque, Colombia

Authors: Miguel Cañon, Darwin Mena, Ivan Cabeza

Abstract:

In this work, the applicability of statistical methods for estimating missing precipitation data was compared and evaluated in the basin of the Lenguazaque river, located in the departments of Cundinamarca and Boyacá, Colombia. The methods used were simple linear regression, the distance-ratio method, local averages, mean ratios, correlation with nearby stations, and multiple regression. The effectiveness of the methods was assessed with three statistical tools: the correlation coefficient (r²), the standard error of estimation, and the Bland–Altman agreement test. The analysis was performed by randomly removing real rainfall values in each of the seasons and then estimating them with the methodologies mentioned to complete the missing values. It was thus determined that, under the conditions considered, the methods with the highest performance and accuracy were multiple regression with three nearby stations and a random application scheme supported by the precipitation behaviour of the related data sets.
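The best-performing method, multiple regression on three nearby stations, can be sketched with NumPy least squares; the rainfall figures below are invented for illustration:

```python
import numpy as np

# Monthly rainfall (mm) at three nearby stations and at the target
# station; the numbers are invented for illustration.
neighbors = np.array([
    [100.0,  95.0, 110.0],
    [ 80.0,  78.0,  85.0],
    [120.0, 110.0, 128.0],
    [ 60.0,  62.0,  66.0],
    [ 90.0,  88.0,  95.0],
])
target = np.array([105.0, 82.5, 124.0, 63.0, 92.5])

# Multiple regression: target ≈ X @ beta, with an intercept column
X = np.column_stack([np.ones(len(neighbors)), neighbors])
beta, *_ = np.linalg.lstsq(X, target, rcond=None)

# Fill a missing month from the neighbours' readings for that month
missing_month = np.array([1.0, 85.0, 83.0, 90.0])
estimate = float(missing_month @ beta)
```

In a real application the fitted values would then be compared against withheld observations with r², the standard error of estimation and the Bland–Altman test, as the abstract describes.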

Keywords: statistical comparison, precipitation data, river subbasin, Bland and Altman

Procedia PDF Downloads 463
27371 Supramolecular Approach towards Novel Applications: Battery, Band Gap and Gas Separation

Authors: Sudhakara Naidu Neppalli, Tejas S. Bhosale

Abstract:

It is well known that a block copolymer (BCP) can form a complex molecule, through non-covalent bonds such as hydrogen bonds, ionic bonds and coordination bonds, with low-molecular-weight compounds as well as with macromolecules, which provides vast applications, including the alteration of polymer morphology and properties. We therefore examined the importance of non-covalent bonds in increasing the unfavourable segmental interactions of the blocks by attaching and detaching the bonds between the BCP and the additive. We also monitored the phase transition of the block copolymer and the effective interaction parameter (χeff) for Li-doped polymers using small-angle X-ray scattering and transmission electron microscopy. The effective interaction parameter (χeff) between the two block components was evaluated using Leibler's theory, based on the incompressible random phase approximation (RPA), for ionized BCP in the disordered state. Furthermore, conductivity experiments demonstrate that the ionic conductivity in samples quenched from different structures is morphology-independent, while it increases with increasing ion-salt concentration. Morphological transitions, the interaction parameter and thermal stability were also examined in quaternized block copolymers. The d-spacing was used to estimate the effective interaction parameter (χeff) of the block components in the weak and strong segregation regimes of the ordered phase. Metal-containing polymers have been a topic of great attention in recent years due to their wide range of potential applications. Similarly, metal-ligand complexes are used as supramolecular linkers between polymers, giving rise to a metallo-supramolecular assembly. More precisely, a functionalized polymer end-capped with a 2,2':6',2''-terpyridine ligand can be selectively complexed with a wide range of transition metal ions and then subsequently attached to another terpyridine-terminated polymer block.
Compared with other supramolecular assemblies, BCP-based metallo-supramolecular assemblies offer vast applications such as optical activity, electrical conductivity, luminescence and photorefractivity.

Keywords: band gap, block copolymer, conductivity, interaction parameter, phase transition

Procedia PDF Downloads 162
27370 Experimental and Finite Element Analysis of Large Deformation Characteristics of Magnetic Responsive Hydrogel Nanocomposites Membranes

Authors: Mallikarjunachari Gangapuram

Abstract:

Stimuli-responsive hydrogel nanocomposite membranes are gaining significant attention these days due to their potential applications in various engineering fields. For example, sensors, soft actuators, drug delivery, remote-controlled therapy, water treatment, shape morphing and magnetic refrigeration are a few advanced applications of hydrogel nanocomposite membranes. In this work, hydrogel nanocomposite membranes were synthesized by embedding nanometer-sized (300 nm diameter) Fe₃O₄ magnetic particles into polyvinyl alcohol (PVA) polymer. To understand the large-deformation characteristics of these membranes, the well-known ball indentation technique was used. The effects of design parameters such as membrane thickness, concentration of magnetic particles and ball diameter on the viscoelastic properties were studied. All experiments were carried out both without and with a static magnetic field. Finite element simulations were carried out to validate the experimental results. It was observed that the creep response decreases and Young's modulus increases as the thickness and the concentration of magnetic particles increase. Image analysis revealed that the hydrogel membranes undergo global deformation for a ball diameter of 18 mm and local deformation when the diameter decreases from 18 mm to 0.5 mm.

Keywords: ball indentation, hydrogel membranes, nanocomposites, Young's modulus

Procedia PDF Downloads 119
27369 Comparison of Different Activators Impact on the Alkali-Activated Aluminium-Silicate Composites

Authors: Laura Dembovska, Ina Pundiene, Diana Bajare

Abstract:

Alkali-activated aluminium-silicate composites (AASC) can be used in the production of innovative materials with a wide range of properties and applications. AASC are associated with low CO₂ emissions; in the production process it is possible to use industrial by-products and waste, thereby minimizing the use of non-renewable natural resources. This study deals with the preparation of heat-resistant porous AASC based on chamotte for high-temperature applications up to 1200°C. Different fillers were used, with aluminium scrap recycling waste as the pore-forming agent, and alkali activation was performed with 6M sodium hydroxide (NaOH) and potassium hydroxide (KOH) solutions. Sodium hydroxide is widely used for the synthesis of AASC compared to potassium hydroxide, but a systematic comparison of different activators for geopolymer synthesis is not well established. Changes in the chemical composition of AASC during heating were identified and quantitatively analyzed by DTA, dimensional changes during heating were determined by HTOM, the pore microstructure was examined by SEM, and the mineralogical composition of AASC was determined by XRD. Lightweight porous AASC activated with NaOH were obtained with densities ranging from 600 to 880 kg/m³ and compressive strengths from 0.8 to 2.7 MPa, while for AASC activated with KOH the density ranged from 750 to 850 kg/m³ and the compressive strength from 0.7 to 2.1 MPa.

Keywords: alkali activation, alkali activated materials, elevated temperature application, heat resistance

Procedia PDF Downloads 174
27368 Analysis of Patent Protection of Bone Tissue Engineering Scaffold Technology

Authors: Yunwei Zhang, Na Li, Yuhong Niu

Abstract:

Bone tissue engineering scaffolds are regarded as an important clinical technology for treating bone defects. Patent protection of bone tissue engineering scaffolds has received increasing attention and has been strengthened all over the world. This study analyzed the future development trends of international technologies and patent protection in the field of bone tissue engineering scaffolds. The methods of data classification and classification indexing were used to analyze 2718 patents retrieved from the patent database. Results showed that patents originating from the United States had a competitive advantage over other countries in this field; the number of patent applications by a single U.S. company amounted to a quarter of the world total. However, R&D capability in China was clearly weaker than the global level, with patents mainly coming from universities and scientific research institutions. Moreover, it is predicted that synthetic organic materials will gradually be replaced by composite materials as new scaffold materials, and that patent protection of composite-material technologies will be further strengthened in the future.

Keywords: bone tissue engineering, patent analysis, scaffold material, patent protection

Procedia PDF Downloads 129
27367 Hyperspectral Data Classification Algorithm Based on the Deep Belief and Self-Organizing Neural Network

Authors: Li Qingjian, Li Ke, He Chun, Huang Yong

Abstract:

In this paper, a method combining the Pohl Seidman deep belief network with a self-organizing neural network is proposed to classify targets. The method is mainly aimed at the high nonlinearity of hyperspectral images, their high sample dimensionality and the difficulty of designing a classifier. The main features of the original data are extracted by the deep belief network; during feature extraction, labeled samples are added to fine-tune the network and enrich the extracted characteristics. The extracted feature vectors are then classified by the self-organizing neural network. This method can effectively reduce the spectral dimensionality of the data while preserving a large amount of the raw information, addresses both the shortcomings of traditional clustering and the long training time of deep learning algorithms when few labeled samples are available, and improves classification accuracy and robustness. Data simulations show that the proposed network structure achieves higher classification precision when only a small number of labeled samples are available.

Keywords: DBN, SOM, pattern classification, hyperspectral, data compression

Procedia PDF Downloads 335
27366 Assessing Performance of Data Augmentation Techniques for a Convolutional Network Trained for Recognizing Humans in Drone Images

Authors: Masood Varshosaz, Kamyar Hasanpour

Abstract:

In recent years, there has been growing interest in recognizing humans in drone images for post-disaster search and rescue operations. Deep learning algorithms have shown great promise in this area, but they often require large amounts of labeled data to train the models. To keep data acquisition costs low, augmentation techniques can be used to create additional data from existing images. Many such techniques can generate variations of an original image to improve the performance of deep learning algorithms. While data augmentation is generally assumed to improve the accuracy and robustness of the models, it is important to ensure that the performance gains are not outweighed by the additional computational cost or complexity of implementing the techniques. To this end, the impact of data augmentation on model performance must be evaluated. In this paper, we evaluated the most common currently available 2D data augmentation techniques on a standard convolutional network trained for recognizing humans in drone images. The techniques include rotation, scaling, random cropping, flipping, shifting, and their combinations. The results showed that the augmented models perform 1-3% better than the base network. However, as the augmented images only contain the human parts already visible in the original images, a new data augmentation approach is needed to include the invisible parts of the human body. We therefore suggest a new method that employs simulated 3D human models to generate new training data.
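A few of the listed geometric techniques can be sketched directly with NumPy array operations; this is an illustration rather than the paper's pipeline, and the 3×3 integer array stands in for a drone image crop:

```python
import numpy as np

def augment(img):
    """Yield simple geometric variants of an image array
    (a subset of the 2D techniques compared in the paper)."""
    yield np.fliplr(img)                  # horizontal flip
    yield np.rot90(img)                   # 90-degree rotation
    yield np.roll(img, shift=1, axis=0)   # vertical shift (wrap-around)

img = np.arange(9).reshape(3, 3)  # stand-in for a drone image crop
variants = list(augment(img))
```

Each variant would be added to the training set with the same label as the original image, multiplying the effective dataset size at negligible acquisition cost.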

Keywords: human recognition, deep learning, drones, disaster mitigation

Procedia PDF Downloads 86
27365 The Elastic Field of a Nano-Pore, and the Effective Modulus of Composites with Nano-Pores

Authors: Xin Chen, Moxiao Li, Xuechao Sun, Fei Ti, Shaobao Liu, Feng Xu, Tian Jian Lu

Abstract:

Composite materials with pores have the characteristics of light weight, sound insulation and heat insulation, and have broad prospects in many fields, including aerospace. In general, the stiffness of such a composite is less than the stiffness of the matrix material, limiting its applications. In this paper, we establish a theoretical model to analyze the deformation mechanism of a nano-pore. The interface between the pores and the matrix material is described by the Gurtin-Murdoch model. By considering the scale effect related to the current deformation, we estimate the effective mechanical properties (e.g., effective shear modulus and bulk modulus) of a composite with nano-pores. Due to the scale effect, the elastic field in the composite is changed and local hardening is observed around the nano-pore, and the effective shear and bulk moduli are found to be functions of the surface energy. The effective shear modulus increases with the surface energy and decreases with the size of the nano-pores, while the effective bulk modulus decreases with the surface energy and increases with the size of the nano-pores. These results have potential applications in nanocomposite mechanics and the aerospace field.

Keywords: composite mechanics, nano-inhomogeneity, nano-pores, scale effect

Procedia PDF Downloads 128
27364 Optimized Dynamic Bayesian Networks and Neural Verifier Test Applied to On-Line Isolated Characters Recognition

Authors: Redouane Tlemsani, Belkacem Kouninef, Abdelkader Benyettou

Abstract:

In this paper, our system is a Markovian system that can be viewed as a Dynamic Bayesian Network. One of the major interests of these systems resides in the complete training of the models (topology and parameters) from training data. Bayesian Networks represent models of uncertain knowledge about complex phenomena. They are a union between probability theory and graph theory that provides effective tools for representing a joint probability distribution over a set of random variables. Knowledge is represented by describing, through graphs, the causal relations between the variables defining the field of study. The theory of Dynamic Bayesian Networks is a generalization of Bayesian networks to dynamic processes. Our objective amounts to finding the best structure representing the relationships (dependencies) between the variables of a dynamic Bayesian network. In pattern recognition applications, one fixes the structure, which obliges us to admit some strong assumptions (for example, independence between some variables).

Keywords: Arabic on line character recognition, dynamic Bayesian network, pattern recognition, networks

Procedia PDF Downloads 608
27363 Emotional Artificial Intelligence and the Right to Privacy

Authors: Emine Akar

Abstract:

The majority of privacy-related regulation has traditionally focused on concepts that are perceived to be well understood or easily describable, such as certain categories of data and personal information or images. In the past century, such regulation appeared reasonably suitable for its purposes. However, technologies such as AI, combined with ever-increasing capabilities to collect, process, and store "big data", not only require calibration of these traditional understandings but may require re-thinking of entire categories of privacy law. In the presentation, it will be explained, against the background of various emerging technologies under the umbrella term "emotional artificial intelligence", why modern privacy law will need to embrace human emotions as potentially private subject matter. This argument can be made on a jurisprudential level, given that human emotions can plausibly be accommodated within the various concepts that are traditionally regarded as the underlying foundation of privacy protection, such as, for example, dignity, autonomy, and liberal values. However, the practical reasons for regarding human emotions as potentially private subject matter are perhaps more important (and very likely more convincing from the perspective of regulators). In that respect, it should be regarded as alarming that, according to most projections, the usefulness of emotional data to governments and, particularly, private companies will not only lead to radically increased processing and analysing of such data but, concerningly, to an exponential growth in the collection of such data. In light of this, it is also necessary to discuss options for how regulators could address this emerging threat.

Keywords: AI, privacy law, data protection, big data

Procedia PDF Downloads 83
27362 Develop a Conceptual Data Model of Geotechnical Risk Assessment in Underground Coal Mining Using a Cloud-Based Machine Learning Platform

Authors: Reza Mohammadzadeh

Abstract:

The major challenges in geotechnical engineering in underground spaces arise from uncertainties and differing probabilities. The collection, collation and collaboration of existing data, incorporated into analysis and design for a given prospect evaluation, would be a reliable and practical problem-solving method under uncertainty. Machine learning (ML) is a subfield of artificial intelligence within statistical science that applies different techniques (e.g., regression, neural networks, support vector machines, decision trees, random forests, genetic programming) to data to automatically learn and improve from them without being explicitly programmed, and to make decisions and predictions. In this paper, a conceptual database schema of geotechnical risks in underground coal mining, based on a cloud system architecture, has been designed. A new approach to risk assessment using a three-dimensional risk matrix supported by the level of knowledge (LoK) is proposed in this model. The stages of the model's workflow methodology are then described. To train the data and deploy the LoK models, an ML platform has been implemented; IBM Watson Studio, a leading data science tool and data-driven cloud-integrated ML platform, is employed in this study. As a use case, a data set of geotechnical hazards and risk assessments in underground coal mining was prepared to demonstrate the performance of the model, and the results are outlined.
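The idea of a three-dimensional risk matrix supported by LoK can be sketched as follows; the scaling formula and the 1-5 scales are assumptions for illustration, since the abstract does not publish the actual formula:

```python
def risk_score(likelihood, consequence, lok):
    """Three-dimensional risk matrix sketch: the conventional
    likelihood x consequence product is scaled by the level of
    knowledge (LoK), so poorly understood hazards rank higher.
    All inputs on a 1-5 scale; LoK 5 = well understood.
    Illustrative scaling only, not the paper's published formula."""
    knowledge_factor = (6 - lok) / 5        # 1.0 at LoK=1, 0.2 at LoK=5
    return likelihood * consequence * (1 + knowledge_factor)

# Same hazard, different states of knowledge
well_known = risk_score(3, 4, lok=5)
poorly_known = risk_score(3, 4, lok=1)
```

The design intent is that two hazards with identical likelihood and consequence are ranked differently when one is poorly characterised, steering investigation effort toward the knowledge gap.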

Keywords: data model, geotechnical risks, machine learning, underground coal mining

Procedia PDF Downloads 264
27361 Classification of Poverty Level Data in Indonesia Using the Naïve Bayes Method

Authors: Anung Style Bukhori, Ani Dijah Rahajoe

Abstract:

Poverty poses a significant challenge in Indonesia, requiring an effective analytical approach to understand and address the issue. In this research, we applied the Naïve Bayes classification method to examine and classify poverty data in Indonesia. The main focus is on classifying the data using RapidMiner, a powerful data analysis platform. The analysis process involves splitting the data to train and test the classification model. First, we collected and prepared a poverty dataset that includes factors such as education, employment and health. The experimental results indicate that the Naïve Bayes classification model can provide predictions regarding the risk of poverty, and the use of RapidMiner offers flexibility and efficiency in evaluating the model's performance. The classification produces several values that serve as the standard for classifying poverty data in Indonesia using Naïve Bayes. The accuracy obtained is 40.26%, with a recall of 35.94% for the moderate class, 63.16% for the high class and 38.03% for the low class. The precision is 58.97% for the moderate class, 17.39% for the high class and 58.70% for the low class. These results can be seen in the graph below.
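The Naïve Bayes step itself can be sketched from scratch for categorical features; this is an illustration of the method rather than the paper's RapidMiner workflow, and the toy rows and labels are invented:

```python
import math
from collections import Counter, defaultdict

def train_nb(rows, labels):
    """Categorical Naive Bayes with Laplace smoothing (a from-scratch
    sketch of the method; the paper itself uses RapidMiner)."""
    n = len(rows)
    priors = Counter(labels)
    counts = defaultdict(Counter)   # (feature index, class) -> value counts
    for row, c in zip(rows, labels):
        for i, v in enumerate(row):
            counts[(i, c)][v] += 1
    def predict(row):
        def log_posterior(c):
            lp = math.log(priors[c] / n)
            for i, v in enumerate(row):
                cnt = counts[(i, c)]
                # Laplace smoothing, assuming two values per feature
                lp += math.log((cnt[v] + 1) / (sum(cnt.values()) + 2))
            return lp
        return max(priors, key=log_posterior)
    return predict

# Invented toy rows: (education, employment) -> poverty level
rows = [("low", "informal"), ("low", "unemployed"),
        ("high", "formal"), ("high", "formal")]
labels = ["poor", "poor", "not_poor", "not_poor"]
predict = train_nb(rows, labels)
```

Precision and recall per class, as reported in the abstract, would then be computed from a confusion matrix over a held-out test split.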

Keywords: poverty, classification, naïve bayes, Indonesia

Procedia PDF Downloads 49
27360 Selective and Highly Sensitive Measurement of ¹⁵NH₃ Using Photoacoustic Spectroscopy for Environmental Applications

Authors: Emily Awuor, Helga Huszar, Zoltan Bozoki

Abstract:

Isotope analysis has found numerous applications in environmental science, the most common being the tracing of environmental contaminants on both regional and global scales. Many environmental contaminants contain ammonia (NH₃), since it is the most abundant alkaline gas in the atmosphere, and its largest sources are agricultural and industrial activities. NH₃ isotopes (¹⁴NH₃ and ¹⁵NH₃) are therefore important and can be used in traceability studies of these atmospheric pollutants. The goal of the project is the construction of a photoacoustic spectroscopy system capable of selectively measuring the concentration of the ¹⁵NH₃ isotope. A further objective is for the system to be robust, easy to use and automated. This is achieved by using two telecommunication-type near-infrared distributed feedback (DFB) diode lasers and a laser coupler as the light source of the photoacoustic measurement system. The central wavelength of the lasers in use is 1532 nm, with a tuning range of ±1 nm. In this range, strong absorption lines can be found for both ¹⁴NH₃ and ¹⁵NH₃. For the selective measurement of ¹⁵NH₃, wavelengths were chosen where the cross effect of ¹⁴NH₃ and water vapor is negligible. We completed the calibration of the photoacoustic system, and as a result the lowest detectable concentration was 3.32 ppm (3σ) for ¹⁵NH₃ and 0.44 ppm (3σ) for ¹⁴NH₃. These results are most useful in environmental pollution measurement and analysis.
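The quoted 3σ detection limits follow the standard convention of three times the blank-signal noise divided by the calibration sensitivity; a minimal sketch, with invented blank readings and slope rather than the paper's data:

```python
import statistics

def detection_limit_3sigma(blank_signals, sensitivity):
    """Lowest detectable concentration as 3x the standard deviation of
    repeated blank (zero-gas) signals divided by the calibration slope
    (signal units per ppm). Values below are invented, not the paper's."""
    return 3 * statistics.stdev(blank_signals) / sensitivity

blank_signals = [0.10, 0.12, 0.11, 0.09, 0.13]   # repeated blank readings
lod_ppm = detection_limit_3sigma(blank_signals, sensitivity=0.05)
```

A steeper calibration slope (stronger photoacoustic response per ppm) or quieter blanks both push the detection limit down, which is why the more abundant ¹⁴NH₃ line yields a lower limit than ¹⁵NH₃.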

Keywords: ammonia isotope, near-infrared DFB diode laser, photoacoustic spectroscopy, environmental monitoring

Procedia PDF Downloads 140
27359 Effect of Curing Temperature on the Textural and Rheological of Gelatine-SDS Hydrogels

Authors: Virginia Martin Torrejon, Binjie Wu

Abstract:

Gelatine is a protein biopolymer obtained from the partial hydrolysis of animal tissues that contain collagen, the primary structural component of connective tissue. Gelatine hydrogels have attracted considerable research in recent years as an alternative to synthetic materials due to their outstanding gelling properties, biocompatibility and compostability. Surfactants, such as sodium dodecyl sulfate (SDS), are often used in hydrogel solutions as surface modifiers or solubility enhancers, and their incorporation can influence the hydrogel's viscoelastic properties and, in turn, its processing and applications. The literature usually focuses on the impact of formulation parameters (e.g., gelatine content, gelatine strength, additive incorporation) on gelatine hydrogel properties, but processing parameters, such as curing temperature, are commonly overlooked. For example, some authors have reported a decrease in gel strength at lower curing temperatures, but there is a lack of systematic viscoelastic characterisation of high-strength gelatine and gelatine-SDS systems over a wide range of curing temperatures. This knowledge is essential to meet and adjust the technological requirements of different applications (e.g., viscosity, setting time, gel strength or melting/gelling temperature). This work investigated the effect of curing temperature (10, 15, 20, 23, 25 and 30°C) on the elastic modulus (G') and melting temperature of high-strength gelatine-SDS hydrogels, at 10 wt% and 20 wt% gelatine content, by small-amplitude oscillatory shear rheology coupled with Fourier transform infrared spectroscopy. It also correlates the gel strength obtained by rheological measurements with the gel strength measured by texture analysis.
The rheological behaviour of gelatine and gelatine-SDS hydrogels strongly depended on the curing temperature, and their gel strength and melting temperature can be adjusted to given processing and application needs. Lower curing temperatures led to gelatine and gelatine-SDS hydrogels with considerably higher storage modulus, but their melting temperature was lower than that of gels cured at higher temperatures, which had lower gel strength. This effect was more pronounced at longer timescales. The behaviour is attributed to the development of thermally resistant structures in the lower-strength gels cured at higher temperatures.

Keywords: gelatine gelation kinetics, gelatine-SDS interactions, gelatine-surfactant hydrogels, melting and gelling temperature of gelatine gels, rheology of gelatine hydrogels

Procedia PDF Downloads 92
27358 Web Search Engine Based Naming Procedure for Independent Topic

Authors: Takahiro Nishigaki, Takashi Onoda

Abstract:

In recent years, the amount of document data has been increasing with the spread of the Internet, and many methods have been studied for extracting topics from large document collections. We previously proposed Independent Topic Analysis (ITA), which uses Independent Component Analysis to extract mutually independent topics from large document data such as newspaper archives. A topic extracted by ITA is represented by a set of words; however, such a set can be quite different from the topic the user imagines. For example, the top five words with high independence for one topic are Topic1 = {"scor", "game", "lead", "quarter", "rebound"}, which is naturally read as the topic "SPORTS", yet this topic name has to be supplied by the user because ITA cannot name topics. In this research, we therefore propose a method that uses a web search engine to obtain easily understandable names for the topics produced by independent topic analysis. In particular, we issue a topic's word set as a search query, and the title of the homepage of the top search result is taken as the topic name. We also apply the proposed method to real data and verify its effectiveness.
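The naming step can be sketched as follows; the search-engine call is stubbed out with a lookup table, since the abstract does not specify which engine or API is used, and the result title is invented:

```python
def name_topic(topic_words, search_titles):
    """Name a topic by querying a search engine with its word set and
    taking the title of the top result. `search_titles` stands in for
    the real search-engine call, which the abstract does not specify."""
    query = " ".join(topic_words)
    titles = search_titles(query)
    return titles[0] if titles else query

# Stubbed search results for illustration only
fake_results = {"scor game lead quarter rebound": ["SPORTS news and scores"]}
topic_name = name_topic(["scor", "game", "lead", "quarter", "rebound"],
                        lambda q: fake_results.get(q, []))
```

Passing the search function in as a parameter keeps the sketch testable without network access; a real deployment would substitute an actual search-engine client.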

Keywords: independent topic analysis, topic extraction, topic naming, web search engine

Procedia PDF Downloads 116
27357 Estimating the Life-Distribution Parameters of Weibull-Life PV Systems Utilizing Non-Parametric Analysis

Authors: Saleem Z. Ramadan

Abstract:

In this paper, a model is proposed to determine the life-distribution parameters of the useful-life region of a PV system, utilizing a combination of non-parametric and linear regression analysis on the failure data of these systems. Results showed that this method is dependable for analyzing failure-time data for such reliable systems when the data are scarce.
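One standard way to combine a non-parametric step with linear regression for Weibull-life data is median-rank regression; the sketch below illustrates that general technique (the paper's exact procedure may differ), with invented failure times:

```python
import math

def weibull_fit(failure_times):
    """Estimate Weibull shape (beta) and scale (eta) by combining
    non-parametric plotting positions (Benard's median ranks) with a
    linear regression on the linearized CDF:
    ln(-ln(1 - F)) = beta*ln(t) - beta*ln(eta)."""
    t = sorted(failure_times)
    n = len(t)
    xs = [math.log(ti) for ti in t]
    ys = [math.log(-math.log(1 - (i - 0.3) / (n + 0.4)))
          for i in range(1, n + 1)]
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    eta = math.exp(mx - my / beta)
    return beta, eta

# Invented failure times (years) for a small sample of PV systems
beta, eta = weibull_fit([12, 25, 31, 44, 60])
```

Because the ranks are assigned non-parametrically and the fit is an ordinary least-squares line, the approach remains usable when failure data are scarce, which matches the abstract's motivation.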

Keywords: masking, bathtub model, reliability, non-parametric analysis, useful life

Procedia PDF Downloads 553
27356 Minimizing the Impact of Covariate Detection Limit in Logistic Regression

Authors: Shahadut Hossain, Jacek Wesolowski, Zahirul Hoque

Abstract:

In many epidemiological and environmental studies, covariate measurements are subject to a detection limit. In most applications, covariate measurements are truncated from below, known as left truncation, because the measuring device fails to detect values falling below a certain threshold. In regression analyses, this inflates the bias and mean squared error (MSE) of the estimators. This paper suggests a response-based regression calibration method to correct the deleterious impact of the covariate detection limit on the estimators of the parameters of a simple logistic regression model. Compared to the maximum likelihood method, the proposed method is computationally simpler and hence easier to implement. It is robust to violations of the distributional assumption about the covariate of interest. The performance of the proposed method in producing correct inference, compared to competing methods, is investigated through extensive simulations. A real-life application of the method is also shown using data from a population-based case-control study of non-Hodgkin lymphoma.
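As context for the methods being compared, the sketch below shows conditional-mean imputation of non-detects under an assumed normal covariate, a building block of calibration-type corrections. This is an illustration of the detection-limit problem, not the paper's response-based method:

```python
import math

def cond_mean_below(mu, sigma, lod):
    """E[X | X < lod] for X ~ Normal(mu, sigma):
    mu - sigma * phi(a) / Phi(a), with a = (lod - mu) / sigma."""
    a = (lod - mu) / sigma
    phi = math.exp(-0.5 * a * a) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(a / math.sqrt(2.0)))
    return mu - sigma * phi / Phi

def impute(values, lod, mu, sigma):
    """Replace non-detects (recorded as None) by the conditional mean below the LOD."""
    fill = cond_mean_below(mu, sigma, lod)
    return [fill if v is None else v for v in values]

# Example: covariate ~ N(10, 2) with detection limit 8.
print(cond_mean_below(10.0, 2.0, 8.0))
```

Unlike ad-hoc substitution by a fixed constant (e.g., LOD/2), this fill-in value respects the assumed covariate distribution; the paper's regression calibration goes further by conditioning on the response as well.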

Keywords: environmental exposure, detection limit, left truncation, bias, ad-hoc substitution

Procedia PDF Downloads 229
27355 Preliminary Design of Maritime Energy Management System: Naval Architectural Approach to Resolve Recent Limitations

Authors: Seyong Jeong, Jinmo Park, Jinhyoun Park, Boram Kim, Kyoungsoo Ahn

Abstract:

Energy management in the maritime industry is being driven by economics and by new legislative actions taken by the International Maritime Organization (IMO) and the European Union (EU). In response, various performance-monitoring methodologies and data-collection practices have been examined by different stakeholders. While many assorted advancements in operation and technology are applicable, their adoption in the shipping industry remains limited. This slow uptake can be attributed to barriers such as data-analysis problems, misreported data, and feedback problems. This study presents a conceptual design of an energy management system (EMS) and proposes a methodology to resolve these limitations (e.g., data normalization using naval architectural evaluation, management of misreported data, and feedback from shore to ship through management of performance-analysis history). We expect this system to allow even short-term charterers to assess ship performance properly and to implement sustainable fleet control.
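The abstract does not specify how its naval architectural normalization works; one classical index used to compare propulsion performance across loading conditions and speeds is the Admiralty coefficient, sketched here with made-up noon-report figures:

```python
def admiralty_coefficient(displacement_t, speed_kn, power_kw):
    """Admiralty coefficient C = displacement**(2/3) * V**3 / P, a classical
    naval architectural index for normalizing propulsion performance."""
    return displacement_t ** (2.0 / 3.0) * speed_kn ** 3 / power_kw

# Two reports at different drafts and speeds (illustrative numbers only):
c1 = admiralty_coefficient(50000.0, 14.0, 9000.0)
c2 = admiralty_coefficient(45000.0, 13.5, 8000.0)
print(c1, c2)
```

After this normalization the two reports become directly comparable, unlike the raw power or fuel figures, which conflate hull performance with loading condition and speed.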

Keywords: data normalization, energy management system, naval architectural evaluation, ship performance analysis

Procedia PDF Downloads 443
27354 Cellulose Containing Metal Organic Frameworks in Environmental Applications

Authors: Hossam El-Sayed Emam

Abstract:

As an essential issue for life, water while it’s important for all living organisms. However, the world is dangerously facing the serious problem for the deficiency of the sources of drinking water. Within the aquatic systems, there are various gases, microbes, and other toxic ingredients (chemical compounds and heavy metals) occurred owing to the draining of agricultural and industrial wastewater, resulting in water pollution. On the other hand, fuel (gaseous, liquid, or in solid phase) is one of the extensively consumable energy sources, and owing to its origin from fossil, it contains some sulfur-, nitrogen- and oxygen-based compounds that cause serious problems (toxicity, catalyst poisoning, corrosion, and gum formation andcarcinogenic effects), to be ascribed as undesirable pollutants.MOFs as porous coordinating polymers are superiorly exploited in the adsorption and separationof contaminants for wastewater treatment and fuel purification. The inclusion of highly adsorbent materials like MOFs to be immobilized within cellulosic materialscould be investigated as a new challenge for the separation of contaminants with high efficiency and opportunity for recyclability. Therefore, the current approach ascribes the exploitation of different MOFsimmobilized within cellulose (powder, films, and fabrics)for applications in environmental. Herein, using cellulose containing MOFs in dye removal (degradation and adsorption), pharmaceutical intermediates removal, and fuel purification were summarized.

Keywords: cellulose, MOFs, dye removal, pharmaceutical intermediates, fuel purification

Procedia PDF Downloads 147
27353 Survey on Fiber Optic Deployment for Telecommunications Operators in Ghana: Coverage Gap, Recommendations and Research Directions

Authors: Francis Padi, Solomon Nunoo, John Kojo Annan

Abstract:

This paper presents a comprehensive survey on the deployment of fiber optic networks for telecommunications operators in Ghana. It addresses the challenges encountered by operators using microwave transmission systems for backhauling traffic and emphasizes the advantages of deploying fiber optic networks. The study examines the coverage gap, provides recommendations, and outlines research directions to enhance the telecommunications infrastructure in Ghana. Additionally, it evaluates next-generation optical access technologies and architectures tailored to operators' needs, and investigates current technological solutions as well as the regulatory, technical, and economic dimensions of sharing mobile telecommunication networks in emerging countries. Overall, the paper offers valuable insights into fiber optic network deployment for telecommunications operators in Ghana and suggests strategies to meet the increasing demand for data and mobile applications.

Keywords: survey on fiber optic deployment, coverage gap, recommendations, research directions

Procedia PDF Downloads 11
27352 Development and Investigation of Sustainable Wireless Sensor Networks for Forest Ecosystems

Authors: Shathya Duobiene, Gediminas Račiukaitis

Abstract:

Solar-powered wireless sensor nodes work best when they operate continuously with minimal energy consumption. Wireless Sensor Networks (WSNs) are a technology that opens up wide fields of study, and advancements are expanding the prevalence of numerous monitoring applications and real-time environmental support. The Selective Surface Activation Induced by Laser (SSAIL) technology is an exciting development that gives the design of WSN nodes more flexibility in terms of shape, dimensions, and materials. This work proposes a methodology for using SSAIL technology in wireless sensor networks for forest-ecosystem monitoring. WSNs monitoring temperature and humidity were deployed, and their architectures are discussed. The paper presents the experimental outcomes of deploying the newly built sensor nodes in forested areas. Finally, a practical method is offered to extend the WSN's lifespan and ensure its continued operation: when operational, the node is independent of the base station's power supply and uses only as much energy as necessary to sense and transmit data.
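The energy-saving operation described above can be illustrated with a simple duty-cycle energy budget; the battery capacity, currents, and duty cycle below are assumed values for illustration, not the paper's measured figures:

```python
def node_lifetime_days(battery_mah, sleep_ma, active_ma, duty_cycle):
    """Estimated node lifetime from its average current draw:
    I_avg = duty_cycle * I_active + (1 - duty_cycle) * I_sleep."""
    i_avg = duty_cycle * active_ma + (1.0 - duty_cycle) * sleep_ma
    return battery_mah / i_avg / 24.0   # mAh / mA = hours; / 24 = days

# Illustrative: 2000 mAh battery, 0.01 mA sleep, 20 mA active, 1% duty cycle.
print(node_lifetime_days(2000.0, 0.01, 20.0, 0.01))
```

The calculation makes the design trade-off explicit: lifetime is dominated by the active fraction, so sensing and transmitting only as often as necessary (and harvesting solar energy on top) is what keeps the node running between maintenance visits.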

Keywords: internet of things (IoT), wireless sensor network, sensor nodes, SSAIL technology, forest ecosystem

Procedia PDF Downloads 68
27351 Non-Targeted Adversarial Object Detection Attack: Fast Gradient Sign Method

Authors: Bandar Alahmadi, Manohar Mareboyana, Lethia Jackson

Abstract:

Today, many applications use computer vision models, such as face recognition, image classification, and object detection, and the accuracy of these models is critical to the performance of those applications. One challenge facing computer vision models is the adversarial-example attack. In computer vision, an adversarial example is an image intentionally designed to cause a machine learning model to misclassify it. One well-known method used to attack Convolutional Neural Networks (CNNs) is the Fast Gradient Sign Method (FGSM), which finds a perturbation that fools the CNN using the gradient of the CNN's cost function. In this paper, we introduce a novel model that attacks the Region-based Convolutional Neural Network (R-CNN) using FGSM. We first extract the regions detected by the R-CNN and resize these regions to the size of regular images. We then use FGSM to find the perturbation of each region that best fools the CNN. Next, we add the resulting perturbation to the attacked region to obtain a new region image that looks similar to the original to human eyes. Finally, we place the regions back into the original image and test the R-CNN on the attacked images. Our model was able to drop the accuracy of the R-CNN when tested on the Pascal VOC 2012 dataset.
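FGSM itself can be sketched without a full R-CNN. The minimal NumPy example below applies the same sign-of-gradient perturbation, x_adv = x + eps * sign(grad_x L), to a binary logistic model, for which the input gradient is analytic; the model and data are illustrative stand-ins for the CNN applied to each extracted region:

```python
import numpy as np

def fgsm_logistic(x, y, w, b, eps):
    """Non-targeted FGSM on a binary logistic classifier p = sigmoid(w.x + b).
    The gradient of the cross-entropy loss w.r.t. x is (p - y) * w, and the
    adversarial input is x + eps * sign(grad), clipped back to valid pixels."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad = (p - y) * w
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(1)
w = rng.normal(size=16)
b = 0.0
x = rng.uniform(size=16)   # a tiny "region" with pixels in [0, 1]
y = 1.0                    # true label
x_adv = fgsm_logistic(x, y, w, b, eps=0.1)
print(np.max(np.abs(x_adv - x)))   # perturbation is bounded by eps
```

The perturbation stays within an eps-ball of the original (so the region still looks similar), while the loss on the true label increases.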

Keywords: adversarial examples, attack, computer vision, image processing

Procedia PDF Downloads 187
27350 Experimental and Numerical Investigation of Micro-Welding Process and Applications in Digital Manufacturing

Authors: Khaled Al-Badani, Andrew Norbury, Essam Elmshawet, Glynn Rotwell, Ian Jenkinson, James Ren

Abstract:

Micro-welding procedures are widely used for joining materials and for developing duplex components or functional surfaces through methods such as micro discharge welding or spot welding, which can be found in the engineering, aerospace, automotive, biochemical, biomedical, and numerous other industries. The relationship between material properties, structure, and processing is very important for improving the structural integrity and final performance of welded joints. This includes controlling the shape and size of the welding nugget, the state of the heat-affected zone, residual stress, etc. Modern high-volume production requires the welding of ever more versatile shapes, sizes, and material systems suitable for various applications. Hence, an improved understanding of the micro-welding process, together with digital tools based on computational numerical modelling that link key welding parameters, dimensional attributes, and the functional performance of the weldment, would directly benefit industry in developing products that meet current and future market demands. This paper introduces recent work on developing an integrated experimental and numerical modelling code for micro-welding techniques, covering similar and dissimilar materials for both ferrous and non-ferrous metals at different scales. The paper also presents a comparative study of the differences between the micro discharge welding process and the spot welding technique with regard to the size effect of the welding zone and the changes in material structure. A numerical modelling method for micro-welding processes and their effects on material properties during melting and cooling progression at different scales is also presented.
Finally, the applications of the integrated numerical modelling and material development to the digital manufacturing of welding are discussed with reference to typical application cases such as sensors (thermocouples), energy (heat exchangers), and automotive structures (duplex steel structures).
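The thermal side of such a model can be illustrated with a minimal 1D transient heat-conduction solver using explicit finite differences; the diffusivity, grid, and temperatures below are assumed values for illustration, not the paper's calibrated weld model:

```python
import numpy as np

def cool_1d(T0, alpha, dx, dt, steps, t_ambient=25.0):
    """Explicit finite-difference solution of dT/dt = alpha * d2T/dx2, a
    minimal stand-in for the cooling phase of a weld thermal model.
    Stability requires r = alpha*dt/dx**2 <= 0.5."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable"
    T = np.array(T0, dtype=float)
    for _ in range(steps):
        T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        T[0] = T[-1] = t_ambient        # fixed-temperature boundaries
    return T

# A hot weld nugget in the middle of a cold strip (illustrative values).
T0 = [25.0] * 21
T0[10] = 1500.0
T = cool_1d(T0, alpha=4e-6, dx=1e-4, dt=1e-3, steps=200)
print(T[10])   # the peak temperature decays toward ambient
```

Refining the grid (dx) captures the steep thermal gradients of micro-scale welds but tightens the stability limit on dt, which is one reason full weld models move to implicit or commercial FE solvers.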

Keywords: computer modelling, droplet formation, material distortion, materials forming, welding

Procedia PDF Downloads 250
27349 Anisotropic Total Fractional Order Variation Model in Seismic Data Denoising

Authors: Jianwei Ma, Diriba Gemechu

Abstract:

In seismic data processing, attenuation of random noise is the basic step for improving data quality for further application in exploration and development across the gas and oil industries. The signal-to-noise ratio also strongly determines the quality of seismic data, affecting the reliability and accuracy of the seismic signal during interpretation for different purposes in different companies. To use seismic data for further application and interpretation, the signal-to-noise ratio must be improved while random noise is attenuated effectively. To improve the signal-to-noise ratio and attenuate seismic random noise while preserving important features and information in the seismic signal, we introduce an anisotropic total fractional-order denoising algorithm. The anisotropic total fractional-order variation model, defined in the space of fractional-order bounded variation, is proposed as a regularization in seismic denoising. The split Bregman algorithm is employed to solve the minimization problem of the anisotropic total fractional-order variation model, and the corresponding denoising algorithm for the proposed method is derived. We test the effectiveness of the proposed method on synthetic and real seismic data sets, and the denoised result is compared with F-X deconvolution and the non-local means denoising algorithm.
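The split Bregman iteration can be sketched for the ordinary integer-order anisotropic TV model in 1D; the paper's fractional-order difference operator would replace the plain forward-difference matrix `D` below, and all parameters are illustrative:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding (shrinkage), the closed-form d-update in split Bregman."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def tv_denoise_split_bregman(f, mu=5.0, lam=1.0, iters=100):
    """1D anisotropic TV denoising, min_u (mu/2)*||u - f||^2 + ||D u||_1,
    via split Bregman: introduce d = D u, then alternate a linear solve in u,
    shrinkage in d, and a Bregman update of b."""
    n = len(f)
    D = np.eye(n, k=1) - np.eye(n)      # forward differences
    D[-1] = 0.0
    A = mu * np.eye(n) + lam * D.T @ D  # u-subproblem normal equations
    u, d, b = f.copy(), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))
        Du = D @ u
        d = soft(Du + b, 1.0 / lam)
        b += Du - d
    return u

rng = np.random.default_rng(2)
clean = np.repeat([0.0, 1.0, 0.0], 40)          # piecewise-constant "trace"
noisy = clean + 0.2 * rng.normal(size=clean.size)
den = tv_denoise_split_bregman(noisy)
print(np.linalg.norm(den - clean), np.linalg.norm(noisy - clean))
```

The splitting is what makes the method fast: the non-smooth L1 term is handled by element-wise shrinkage, and the remaining quadratic subproblem is a well-conditioned linear solve.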

Keywords: anisotropic total fractional order variation, fractional order bounded variation, seismic random noise attenuation, split Bregman algorithm

Procedia PDF Downloads 203
27348 The Use of Modern Technologies and Computers in the Archaeological Surveys of Sistan in Eastern Iran

Authors: Mahyar MehrAfarin

Abstract:

The Sistan region in eastern Iran is a significant archaeological area in Iran and the Middle East, encompassing 10,000 square kilometers. Previous archaeological field surveys have identified 1662 ancient sites dating from prehistoric periods to the Islamic period. Research Aim: This article aims to explore the utilization of modern technologies and computers in archaeological field surveys in Sistan, Iran, and the benefits derived from their implementation. Methodology: The research employs a descriptive-analytical approach combined with field methods. New technologies and software, such as GPS, drones, magnetometers, equipped cameras, satellite images, and software programs like GIS, Map source, and Excel, were utilized to collect information and analyze data. Findings: The use of modern technologies and computers in archaeological field surveys proved to be essential. Traditional archaeological activities, such as excavation and field surveys, are time-consuming and costly. Employing modern technologies helps preserve ancient sites, record archaeological data accurately, reduce errors and mistakes, and facilitate correct and accurate analysis. Creating a comprehensive and accessible database, generating statistics, and producing graphic designs and diagrams are additional advantages of using efficient technologies in archaeology. Theoretical Importance: The integration of computers and modern technologies in archaeology contributes to interdisciplinary collaborations and facilitates the involvement of specialists from various fields, such as geography, history, art history, anthropology, laboratory sciences, and computer engineering. The utilization of computers in archaeology spans diverse areas, including database creation, statistical analysis, graphics, laboratory and engineering applications, and even artificial intelligence, which remains an unexplored area in Iranian archaeology.
Data Collection and Analysis Procedures: Information was collected using modern technologies and software, capturing geographic coordinates, aerial images, archaeogeophysical data, and satellite images. These data were then input into various software programs for analysis, including GIS, Map source, and Excel. The research employed both descriptive and analytical methods to present findings effectively. Question Addressed: The primary question addressed in this research is how the use of modern technologies and computers in archaeological field surveys in Sistan, Iran, can enhance archaeological data collection, preservation, analysis, and accessibility. Conclusion: The utilization of modern technologies and computers in archaeological field surveys in Sistan, Iran, has proven to be necessary and beneficial. These technologies aid in preserving ancient sites, accurately recording archaeological data, reducing errors, and facilitating comprehensive analysis. The creation of accessible databases, statistics generation, graphic design, and interdisciplinary collaboration are further advantages observed. It is recommended to explore the potential of artificial intelligence in Iranian archaeology as an unexplored area. The research has implications for cultural heritage organizations, archaeology students, and universities involved in archaeological field surveys in Sistan and Baluchistan province. Additionally, it contributes to enhancing the understanding and preservation of Iran's archaeological heritage.
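Once site coordinates are captured by GPS and stored in a database, per-site computations such as inter-site distances become straightforward. A minimal sketch using the haversine formula (the coordinates below are made up for illustration, not actual site locations):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS fixes."""
    r = 6371.0                                  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2.0 * r * math.asin(math.sqrt(a))

# Two illustrative survey fixes roughly a tenth of a degree apart:
print(haversine_km(30.95, 61.50, 31.05, 61.60))
```

GIS packages perform this kind of computation internally; having the coordinates in a structured database is what makes distance, density, and distribution analyses over 1662 sites practical.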

Keywords: Iran, Sistan, archaeological surveys, computer use, modern technologies

Procedia PDF Downloads 72
27347 NSBS: Design of a Network Storage Backup System

Authors: Xinyan Zhang, Zhipeng Tan, Shan Fan

Abstract:

The first layer of defense against data loss is backup data. This paper implements an agent-based network backup system (NSBS) built on a tripartite construction of backup agent, storage server, and backup server, and realizes snapshots and a hierarchical index in NSBS. The system separates control commands from the data flow and balances the system load, thereby improving the efficiency of system backup and recovery. Test results show that the agent-based network backup system can effectively improve task-based concurrency and reasonably allocate network bandwidth; the backup performance loss is smaller, and data recovery efficiency is improved by 20%.
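The snapshot idea can be illustrated with a content-hash comparison; this is a generic sketch of snapshot-based incremental backup, not NSBS's actual snapshot or hierarchical-index implementation:

```python
import hashlib

def file_digest(data: bytes) -> str:
    """Content hash used to decide whether a file changed since the last snapshot."""
    return hashlib.sha256(data).hexdigest()

def incremental_backup(current: dict, last_snapshot: dict) -> dict:
    """Return only the files whose content hash differs from the previous
    snapshot -- the core idea behind snapshot-based incremental backup."""
    return {
        name: data
        for name, data in current.items()
        if file_digest(data) != last_snapshot.get(name)
    }

# Previous snapshot records one digest per file; the current state has changes.
snap1 = {"a.txt": file_digest(b"hello"), "b.txt": file_digest(b"world")}
current = {"a.txt": b"hello", "b.txt": b"world!", "c.txt": b"new"}
changed = incremental_backup(current, snap1)
print(sorted(changed))   # only modified or new files are shipped to the server
```

Shipping only the changed files is what lets a backup agent keep bandwidth use low while the separate control channel coordinates the transfer.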

Keywords: agent, network backup system, three architecture model, NSBS

Procedia PDF Downloads 452