Search results for: Hybrid Fish-Bee Algorithm

1293 A Review of Encryption Algorithms Used in Cloud Computing

Authors: Derick M. Rakgoale, Topside E. Mathonsi, Vusumuzi Malele

Abstract:

Cloud computing offers distributed, online, and on-demand computational services from anywhere in the world. Cloud computing services have grown immensely over the past years, especially in the past year due to the Coronavirus pandemic. Cloud computing has changed the working environment and introduced the work-from-home phenomenon, which accelerated the adoption of technologies, including cloud service offerings, to support the new ways of working. The increased adoption of cloud computing has come with new challenges regarding data privacy and integrity in the cloud environment. Previously proposed encryption algorithms have failed to reduce the memory space required for cloud computing, thus increasing the computational cost. This paper reviews the existing encryption algorithms used in cloud computing. In future work, an artificial neural network (ANN) based algorithm design will be presented as a security solution to ensure data integrity, confidentiality, privacy, and availability of user data in cloud computing. Moreover, MATLAB will be used to evaluate the proposed solution, and simulation results will be presented.

Keywords: cloud computing, data integrity, confidentiality, privacy, availability

Procedia PDF Downloads 102
1292 A New Reliability-Based Channel Allocation Model in Mobile Networks

Authors: Anujendra, Parag Kumar Guha Thakurta

Abstract:

Data transmission between mobile hosts and base stations (BSs) in mobile networks is often vulnerable to failure. Thus, efficient link connectivity, in terms of the services of both the base stations and the communication channels of the network, is required in wireless mobile networks to achieve highly reliable data transmission. In addition, it is observed that the number of blocked hosts increases due to an insufficient number of channels during heavy load on the network. Under such a scenario, the channels must be allocated accordingly to offer reliable communication at any given time. Therefore, a reliability-based channel allocation model with acceptable system performance is proposed as a multi-objective optimization (MOO) problem in this paper. Two conflicting parameters, the Resource Reuse Factor (RRF) and the number of blocked calls, are optimized under a reliability constraint. The solution to this MOO problem is obtained through NSGA-II (Non-dominated Sorting Genetic Algorithm II). The effectiveness of the proposed model is shown with a set of experimental results.
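
Purely as an illustration of the multi-objective setting described above, the sketch below runs the fast non-dominated sorting step at the core of NSGA-II on hypothetical objective pairs; the values stand in for (1 - resource reuse factor, number of blocked calls), both minimized, and are not produced by the paper's channel-allocation model.

```python
import random

def dominates(a, b):
    """a dominates b if it is no worse in all objectives and better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Group points into Pareto fronts, as in NSGA-II's ranking step."""
    fronts, remaining = [], list(points)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q is not p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Hypothetical objective pairs: (1 - resource reuse factor, number of blocked calls).
random.seed(0)
population = [(round(random.random(), 2), random.randint(0, 20)) for _ in range(12)]
for rank, front in enumerate(non_dominated_sort(population), start=1):
    print(f"front {rank}: {front}")
```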

Keywords: base station, channel, GA, pareto-optimal, reliability

Procedia PDF Downloads 385
1291 Effect of Retention Time on Kitchen Wastewater Treatment Using Mixed Algal-Bacterial Consortia

Authors: Keerthi Katam, Abhinav B. Tirunaghari, Vinod Vadithya, Toshiyuki Shimizu, Satoshi Soda, Debraj Bhattacharyya

Abstract:

Researchers worldwide are increasingly focusing on the removal of carbon and nutrients from wastewater using algal-bacterial hybrid systems. Algae produce oxygen during photosynthesis, which is taken up by heterotrophic bacteria for mineralizing organic carbon to carbon dioxide. This phenomenon reduces the net mechanical aeration requirement of aerobic biological wastewater treatment processes. Consequently, the treatment cost is also reduced. Microalgae also participate in the treatment process by taking up nutrients (N, P) from wastewater. Algal biomass, if harvested, can generate value-added by-products. The aim of the present study was to compare the performance of two systems - System A (mixed microalgae and bacteria) and System B (diatoms and bacteria) - in treating kitchen wastewater (KWW). The test reactors were operated at five different solid retention times (SRTs) - 2, 4, 6, 8, and 10 days - in draw-and-fill mode. The KWW was collected daily from the dining hall-kitchen area of the Indian Institute of Technology Hyderabad. The influent and effluent samples were analyzed for total organic carbon (TOC) and total nitrogen (TN) using a TOC-L analyzer. A colorimetric method was used to analyze anionic surfactant. Phosphorus (P) and chlorophyll were measured following standard methods. The TOC, TN, and P of KWW were in the range of 113.5 to 740 mg/L, 2 to 22.8 mg/L, and 1 to 4.5 mg/L, respectively. Both systems gave similar results, with 85% TOC removal and 60% TN removal at 10-d SRT. However, the anionic surfactant removal was 99% in System A and 60% in System B. The chlorophyll concentration increased with an increase in SRT in both systems. At 2-d SRT, no chlorophyll was observed in System B, whereas 0.5 mg/L was observed in System A. At 10-d SRT, the chlorophyll concentration in System A was 7.5 mg/L, whereas it was 4.5 mg/L in System B. Although both systems showed similar treatment performance, the increase in chlorophyll concentration suggests that System A demonstrated a better algal-bacterial symbiotic relationship in treating KWW than System B.

Keywords: diatoms, microalgae, retention time, wastewater treatment

Procedia PDF Downloads 105
1290 Application of Association Rule Using Apriori Algorithm for Analysis of Industrial Accidents in 2013-2014 in Indonesia

Authors: Triano Nurhikmat

Abstract:

Along with the progress of science and technology, industrial development in Indonesia has taken place very rapidly. This has accelerated the industrialization of Indonesian society, with the establishment of diverse companies and workplaces. Industrial development is closely related to the activity of workers, and these work activities do not exclude the possibility of accidents affecting either the workers or a construction project. The causes of industrial accidents include electrical damage, faulty work procedures, and technical errors. Association rule mining is one of the main techniques in data mining and the most common approach for finding patterns in collections of data. This research aims to discover the association relationships between the attributes of industrial accidents. Using association rule analysis with the Apriori algorithm, two-item frequent itemsets (2-large itemsets) were obtained for each factor of the accidents; for example, the pattern associating West Jakarta with industrial accidents caused by electrical damage has a support value of 0.2 and a confidence value of 1, and the reverse pattern has a support value of 0.2 and a confidence value of 0.75.
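
The mining step can be sketched in a few lines of Python. The transactions below are invented placeholders (the actual accident records are not included here); the support and confidence definitions are the standard Apriori ones referred to in the abstract.

```python
from itertools import combinations

# Hypothetical accident records; each transaction lists attributes of one accident.
transactions = [
    {"west_jakarta", "electrical_damage"},
    {"west_jakarta", "electrical_damage"},
    {"east_jakarta", "work_procedure"},
    {"bandung", "error_technique"},
    {"west_jakarta", "work_procedure"},
    {"surabaya", "electrical_damage"},
    {"bandung", "work_procedure"},
    {"east_jakarta", "electrical_damage"},
    {"surabaya", "error_technique"},
    {"bandung", "electrical_damage"},
]

def support(itemset):
    """Fraction of transactions that contain every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

min_support = 0.2
items = sorted({i for t in transactions for i in t})
frequent_2 = [set(c) for c in combinations(items, 2) if support(set(c)) >= min_support]

for itemset in frequent_2:
    for antecedent in itemset:
        consequent = itemset - {antecedent}
        conf = support(itemset) / support({antecedent})
        print(f"{antecedent} -> {next(iter(consequent))}: "
              f"support={support(itemset):.2f}, confidence={conf:.2f}")
```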

Keywords: association rule, data mining, industrial accidents, rules

Procedia PDF Downloads 267
1289 Robotic Mini Gastric Bypass Surgery

Authors: Arun Prasad, Abhishek Tiwari, Rekha Jaiswal, Vivek Chaudhary

Abstract:

Background: Robotic Roux-en-Y gastric bypass has been performed for some time but is technically difficult, requiring operating in both the subdiaphragmatic and infracolic compartments of the abdomen. This can mean a dual docking of the robot or a hybrid partially laparoscopic and partially robotic surgery. The mini / one-anastomosis / omega-loop gastric bypass (MGB) has the advantage of having all dissection and anastomoses in the supracolic compartment and is therefore technically suitable for robotic surgery. Methods: We have performed 208 robotic mini gastric bypass surgeries. The robot is docked above the head of the patient in the midline. The camera port is placed supra-umbilically. Two ports are placed on the left side of the patient and one port on the right side. An assistant port is placed between the camera port and the right-sided robotic port for use of the stapler. The distal stomach is stapled from the lesser curve, followed by a vertical sleeve upwards, leading to a long sleeve pouch. The jejunum is taken at 200 cm from the duodenojejunal junction and brought up to perform a side-to-side gastrojejunostomy. Results: All patients had a successful robotic procedure. The mean operating time was 85 minutes. There were no major intraoperative or postoperative complications. No patient needed conversion or re-explorative surgery. Mean excess weight loss over a period of 2 years was about 75%. There was no mortality. The patient satisfaction score was high and was attributed to the good weight loss and the minimal dietary modifications needed after the procedure. Long-term side effects were anemia and bile reflux in a small number of patients. Conclusions: MGB / OAGB is gaining worldwide interest as a short, simple procedure that has been shown to be a very effective and safe bariatric surgery. The purpose of this study was to report on the safety and efficacy of robotic surgery for this procedure. This is the first report of a totally robotic mini gastric bypass.

Keywords: MGB, mini gastric bypass, OAGB, robotic bariatric surgery

Procedia PDF Downloads 274
1288 The Influence of Beta Shape Parameters in Project Planning

Authors: Alexios Kotsakis, Stefanos Katsavounis, Dimitra Alexiou

Abstract:

Networks can be utilized to represent project planning problems, using nodes for activities and arcs to indicate the precedence relationships between them. For fixed activity durations, a simple algorithm calculates the amount of time required to complete a project, along with the activities that comprise the critical path. The Program Evaluation and Review Technique (PERT) generalizes the above model by incorporating uncertainty, allowing activity durations to be random variables, but nevertheless produces a relatively crude solution to planning problems. In this paper, based on the findings of the relevant literature, which strongly suggest that a Beta distribution can be employed to model earthmoving activities, we utilize Monte Carlo simulation to estimate the project completion time distribution and measure the influence of skewness, an element inherent in the activities of modern technical projects. We also extract the activity criticality index, with the ultimate goal of producing more accurate planning estimations.
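
A minimal sketch of the simulation described above, assuming a small illustrative activity network and hypothetical Beta shape parameters: Beta-distributed durations are sampled, the completion time is obtained by a forward pass over the network, and the activity criticality index is estimated as the fraction of runs in which an activity lies on the critical path.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical activities: name -> (predecessors, low, high, alpha, beta).
# The skewed Beta shape parameters are illustrative, not taken from the paper.
activities = {
    "A": ([],         2.0, 6.0, 2.0, 5.0),
    "B": (["A"],      3.0, 9.0, 5.0, 2.0),
    "C": (["A"],      1.0, 8.0, 2.0, 2.0),
    "D": (["B", "C"], 2.0, 4.0, 3.0, 3.0),
}
order = ["A", "B", "C", "D"]          # topological order
n_runs = 20_000
completion = np.empty(n_runs)
on_critical_path = {a: 0 for a in order}

for r in range(n_runs):
    dur, finish = {}, {}
    for a in order:
        preds, lo, hi, al, be = activities[a]
        dur[a] = lo + (hi - lo) * rng.beta(al, be)       # scaled Beta duration
        finish[a] = dur[a] + max((finish[p] for p in preds), default=0.0)
    completion[r] = max(finish.values())
    # Walk back from the last-finishing activity to mark the critical path.
    a = max(finish, key=finish.get)
    while True:
        on_critical_path[a] += 1
        preds = activities[a][0]
        if not preds:
            break
        a = max(preds, key=lambda p: finish[p])

print(f"mean completion time: {completion.mean():.2f}")
print(f"95th percentile:      {np.percentile(completion, 95):.2f}")
for a in order:
    print(f"criticality index of {a}: {on_critical_path[a] / n_runs:.2f}")
```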

Keywords: beta distribution, PERT, Monte Carlo simulation, skewness, project completion time distribution

Procedia PDF Downloads 128
1287 'Low Electronic Noise' Detector Technology in Computed Tomography

Authors: A. Ikhlef

Abstract:

Image noise in computed tomography is mainly caused by statistical noise, system noise, and reconstruction algorithm filters. In the last few years, low-dose x-ray imaging has become more and more desired and is regarded as a technically differentiating capability among CT manufacturers. In order to achieve this goal, several technologies and techniques are being investigated, including both hardware (integrated electronics and photon counting) and software (artificial intelligence and machine learning) based solutions. From a hardware point of view, addressing electronic noise could indeed be a potential driver for low and ultra-low dose imaging. We demonstrated that the reduction or elimination of this term could lead to a reduction of dose without affecting image quality. In this study, we also show that we can achieve this goal using conventional electronics (a low-cost and affordable technology), designed carefully and optimized for maximum detective quantum efficiency. We conducted the tests using large imaging objects such as 30 cm water and 43 cm polyethylene phantoms. We compared the image quality with conventional imaging protocols at radiation doses as low as 10 mAs (<< 1 mGy). Clinical validation of these results has been performed as well.

Keywords: computed tomography, electronic noise, scintillation detector, x-ray detector

Procedia PDF Downloads 103
1286 Inverse Problem Method for Microwave Intrabody Medical Imaging

Authors: J. Chamorro-Servent, S. Tassani, M. A. Gonzalez-Ballester, L. J. Roca, J. Romeu, O. Camara

Abstract:

Electromagnetic and microwave imaging (MWI) have been used in medical imaging in recent years, the most common applications being breast cancer and stroke detection or monitoring. In those applications, the subject or zone to observe is surrounded by a number of antennas, and the Nyquist criterion can be satisfied. Additionally, the space between the antennas (transmitting and receiving the electromagnetic fields) and the zone to study can be prepared as a homogeneous scenario. However, this may differ in other cases such as intracardiac catheters, stomach monitoring devices, pelvic organ systems, liver ablation monitoring devices, or uterine fibroid ablation systems. In this work, we analyzed different MWI algorithms to find the most suitable method for dealing with an intrabody scenario. Due to the space limitations usually confronted in those applications, the device would have a cylindrical configuration with a maximum of eight transmitter and eight receiver antennas. This, together with the positioning of the supposed device inside a body tract, imposes additional constraints on the choice of a reconstruction method; for instance, it prevents the use of well-known algorithms such as filtered backpropagation for diffraction tomography (due to the unusual configuration with probes enclosed by the imaging region). Finally, the difficulty of simulating a realistic non-homogeneous background inside the body (due to the incomplete knowledge of the dielectric properties of the tissues between the antennas' position and the zone to observe) also prevents the use of Born and Rytov algorithms because of their limitations with a heterogeneous background. Instead, we decided to use a time-reversed algorithm (mostly used in geophysics) due to its ability to ignore heterogeneities in the background medium and to focus its generated field onto the scatterers. Therefore, a 2D time-reversed finite difference time domain method was developed based on the time-reversed approach for microwave breast cancer detection. Simultaneously, an in-silico testbed was also developed to compare ground-truth dielectric properties with the corresponding microwave imaging reconstruction. Forward and inverse problems were computed varying: the frequency used, relative to a small zone to observe (7, 7.5 and 8 GHz); a small polyp diameter (5, 7 and 10 mm); two polyp positions with respect to the closest antenna (aligned or misaligned); and the (transmitters-to-receivers) antenna combination used for the reconstruction (1-1, 8-1, 8-8 or 8-3). Results indicate that when using the existing time-reversed method for breast cancer here for the different combinations of transmitters and receivers, we found false positives due to the high degrees of freedom and unusual configuration (and the possible violation of the Nyquist criterion). Those false positives, found in the 8-1 and 8-8 combinations, were highly reduced with the 1-1 and 8-3 combinations, the 8-3 configuration (three neighboring receivers at each time) being the most suitable. The 8-3 configuration creates a region-of-interest-reduced problem, decreasing the ill-posedness of the inverse problem. To conclude, the proposed algorithm overcomes the main limitations of the described intrabody application, successfully detecting the angular position of targets inside the body tract.

Keywords: FDTD, time-reversed, medical imaging, microwave imaging

Procedia PDF Downloads 103
1285 Machine Learning Approach for Yield Prediction in Semiconductor Production

Authors: Heramb Somthankar, Anujoy Chakraborty

Abstract:

This paper presents a classification study on yield prediction in semiconductor production using machine learning approaches. A complicated semiconductor production process is generally monitored continuously by signals acquired from sensors and measurement sites. A monitoring system contains a variety of signals, which carry useful information, irrelevant information, and noise. With each signal considered a feature, feature selection is used to find the most relevant signals. The open-source UCI SECOM dataset provides 1567 such samples, of which 104 fail quality assurance. Feature extraction and selection are performed on the dataset, and useful signals are considered for further study. Afterward, common machine learning algorithms are employed to predict whether a sample passes or fails. The most suitable algorithm is selected for prediction based on the accuracy and loss of the ML model.
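
A sketch of the pipeline described above using scikit-learn; a synthetic, imbalanced stand-in for the SECOM sensor matrix is generated so the snippet is self-contained, and a variance filter plus univariate selection stand in for the feature-selection step (the paper's exact choices are not reproduced here).

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import VarianceThreshold, SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

# Synthetic stand-in for the SECOM data: many noisy sensor signals, few failures.
X, y = make_classification(n_samples=1567, n_features=400, n_informative=30,
                           weights=[0.93, 0.07], random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

model = make_pipeline(
    VarianceThreshold(threshold=1e-3),        # drop near-constant signals
    SelectKBest(f_classif, k=50),             # keep the most relevant signals
    StandardScaler(),
    RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0),
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```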

Keywords: deep learning, feature extraction, feature selection, machine learning classification algorithms, semiconductor production monitoring, signal processing, time-series analysis

Procedia PDF Downloads 90
1284 Preparation of Novel Silicone/Graphene-based Nanostructured Surfaces as Fouling Release Coatings

Authors: Mohamed S. Selim, Nesreen A. Fatthallah, Shimaa A. Higazy, Zhifeng Hao, Ping Jing Mo

Abstract:

As marine fouling-release (FR) surfaces, two new superhydrophobic nanocomposite series of polydimethylsiloxane (PDMS) loaded with reduced graphene oxide (RGO) and graphene oxide/boehmite nanorod (GO-γ-AlOOH) nanofillers were created. The self-cleaning and antifouling capabilities were tuned by controlling the nanofillers' shapes and distribution in the silicone matrix. With an average diameter of 10-20 nm and a length of 200 nm, the γ-AlOOH nanorods showed single crystallinity. RGO was made using a hydrothermal process, whereas the GO-γ-AlOOH nanocomposites were made using a chemical deposition method for use as fouling-release coating materials. These nanofillers were dispersed in the silicone matrix using the solution casting method to explore the synergetic effects of graphene-based materials on the surface, mechanical, and FR characteristics. Water contact angle (WCA) measurements and scanning electron and atomic force microscopy (SEM and AFM) were used to investigate the surface hydrophobicity and antifouling capabilities. The roughness, superhydrophobicity, and surface mechanical characteristics of the coatings all increased with the homogeneity of the nanocomposite dispersion. To examine the antifouling effects of the coating systems, laboratory tests were conducted for 30 days using specified bacteria. The PDMS/GO-γ-AlOOH nanorod composite demonstrated superior antibacterial efficacy against several bacterial strains compared to the PDMS/RGO nanocomposite. This is attributed to the high surface area and stabilizing effects of the GO-γ-AlOOH hybrid nanofillers. The biodegradability percentage of the PDMS/GO-γ-AlOOH nanorod composite (3 wt.%) was the lowest (1.6%), while the microbial endurability percentages for gram-positive, gram-negative, and fungal strains were 86.42%, 97.94%, and 85.97%, respectively. The homogeneous GO-γ-AlOOH (3 wt.%) dispersion, with a WCA of 151° and a rough surface, was the most profoundly superhydrophobic antifouling nanostructured coating.

Keywords: superhydrophobic nanocomposite, fouling release, nanofillers, surface coating

Procedia PDF Downloads 215
1283 Local Texture and Global Color Descriptors for Content Based Image Retrieval

Authors: Tajinder Kaur, Anu Bala

Abstract:

An image retrieval system is a computer system for browsing, searching, and retrieving images from a large database of digital images. A new algorithm for content-based image retrieval (CBIR) is presented in this paper. The proposed method combines color and texture features, which capture the global and local information of the image. The local texture feature is extracted using local binary patterns (LBP), which are evaluated by considering the local differences between the center pixel and its neighbors. For the global color feature, the color histogram (CH) is used, calculated over the RGB (red, green, and blue) channels separately. In this paper, the combination of color and texture features is proposed for content-based image retrieval. The performance of the proposed method is tested on the Corel 1000 database, which is a natural-image database. The results show a significant improvement in terms of the evaluation measures as compared to LBP and CH alone.
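
A minimal sketch of the feature construction described above, assuming scikit-image for the LBP operator and random images in place of the Corel 1000 database; retrieval is reduced to a simple Euclidean distance between concatenated histograms.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.color import rgb2gray

def cbir_features(rgb_image, lbp_points=8, lbp_radius=1, color_bins=16):
    """Concatenate a local LBP texture histogram with global per-channel color histograms."""
    gray = rgb2gray(rgb_image)
    lbp = local_binary_pattern(gray, lbp_points, lbp_radius, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=lbp_points + 2,
                               range=(0, lbp_points + 2), density=True)
    color_hists = [np.histogram(rgb_image[..., c], bins=color_bins,
                                range=(0, 256), density=True)[0]
                   for c in range(3)]
    return np.concatenate([lbp_hist] + color_hists)

# Toy query: compare one image against a tiny random "database" by histogram distance.
rng = np.random.default_rng(0)
database = [rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8) for _ in range(5)]
query = database[2]
q = cbir_features(query)
distances = [np.linalg.norm(q - cbir_features(img)) for img in database]
print("best match index:", int(np.argmin(distances)))
```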

Keywords: color, texture, feature extraction, local binary patterns, image retrieval

Procedia PDF Downloads 337
1282 A Flexible Real-Time Eco-Drive Strategy for Electric Minibus

Authors: Felice De Luca, Vincenzo Galdi, Piera Stella, Vito Calderaro, Adriano Campagna, Antonio Piccolo

Abstract:

Sustainable mobility has become one of the major issues of recent years. The challenge of reducing polluting emissions as much as possible has led to the production and diffusion of vehicles with less polluting internal combustion engines and to the adoption of green energy vectors, such as vehicles powered by natural gas or LPG and, more recently, hybrid and electric ones. While the spread of electric vehicles for private use is becoming a reality, albeit rather slowly, the same is not happening for vehicles used for public transport, especially those that operate in the congested areas of cities. Even if the first electric buses are increasingly being offered on the market, the problem of autonomy remains central for battery-fed vehicles with long daily routes and little time available for recharging. In fact, at present, solid-state batteries are still too large, heavy, and unable to guarantee the required autonomy. Therefore, in order to maximize the energy management on the vehicle, the optimization of driving profiles offers a faster and cheaper contribution to improving vehicle autonomy. In this paper, following the authors' previous works on electric vehicles in public transport and energy management strategies in the electric mobility area, an eco-driving strategy for an electric bus is presented and validated. In particular, the characteristics of the prototype bus are described, and a general-purpose eco-drive methodology is briefly presented. The model is first simulated in MATLAB™ and then implemented on a mobile device installed on board a prototype bus developed by the authors in a previous research project. The implemented solution provides the bus driver with suggestions on the driving style to adopt. The results of a test in a real case are shown to highlight the effectiveness of the proposed solution in terms of energy saving.

Keywords: eco-drive, electric bus, energy management, prototype

Procedia PDF Downloads 115
1281 Blind Super-Resolution Reconstruction Based on PSF Estimation

Authors: Osama A. Omer, Amal Hamed

Abstract:

Successful blind image super-resolution algorithms require exact estimation of the point spread function (PSF). In the absence of any prior information about the imaging system and the true image, this estimation is normally done by trial-and-error experimentation until an acceptable restored image quality is obtained. Multi-frame blind super-resolution algorithms often have the disadvantages of slow convergence and sensitivity to complex noise. This paper presents a super-resolution image reconstruction algorithm based on estimation of the PSF that yields the optimum restored image quality. The estimation of the PSF is performed by the knife-edge method, implemented by measuring the spreading of the edges in the reproduced HR image itself during the reconstruction process. The proposed image reconstruction approach uses L1-norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models. A series of experimental results shows that the proposed method can outperform previous work robustly and efficiently.
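
A 1-D illustration of the knife-edge idea, under assumed conditions (an ideal step edge blurred by a Gaussian PSF of known synthetic width): the edge spread function is differentiated to obtain the line spread function, from which the blur width is estimated. This is a toy demonstration, not the paper's reconstruction loop.

```python
import numpy as np

# Simulate a blurred knife edge: ideal step convolved with an unknown Gaussian PSF.
x = np.arange(-50, 51)
true_sigma = 2.5
psf = np.exp(-x**2 / (2 * true_sigma**2))
psf /= psf.sum()
step = (x >= 0).astype(float)
edge_profile = np.convolve(step, psf, mode="same")   # measured edge spread function (ESF)

# Differentiate the ESF to obtain the line spread function (LSF), i.e. the 1-D PSF.
lsf = np.gradient(edge_profile)
lsf = np.clip(lsf, 0, None)
lsf /= lsf.sum()

# Estimate the blur width from the LSF's second moment.
centre = np.sum(x * lsf)
sigma_est = np.sqrt(np.sum((x - centre) ** 2 * lsf))
print(f"true sigma = {true_sigma}, estimated sigma = {sigma_est:.2f}")
```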

Keywords: blind, PSF, super-resolution, knife-edge, blurring, bilateral, L1 norm

Procedia PDF Downloads 345
1280 Spatial-Temporal Awareness Approach for Extensive Re-Identification

Authors: Tyng-Rong Roan, Fuji Foo, Wenwey Hseush

Abstract:

Recent developments in AI and edge computing play a critical role in capturing meaningful events such as the detection of an unattended bag. One of the core problems is re-identification across multiple CCTVs. The step immediately following the detection of a meaningful event is to track and trace the objects related to the event. In an extensive environment, the challenge becomes severe when the number of CCTVs increases substantially, imposing difficulties in achieving high accuracy while maintaining real-time performance. The algorithm that re-identifies cross-boundary objects for extensive tracking is referred to as Extensive Re-Identification, which emphasizes the issues related to the complexity behind a great number of CCTVs. The Spatial-Temporal Awareness approach challenges the conventional thinking and concept of operations, which is labor-intensive and time-consuming. The ability to perform Extensive Re-Identification through a multi-sensory network provides the next level of insight - creating value beyond traditional risk management.

Keywords: long-short-term memory, re-identification, security critical application, spatial-temporal awareness

Procedia PDF Downloads 95
1279 Optimal Bayesian Chart for Controlling Expected Number of Defects in Production Processes

Authors: V. Makis, L. Jafari

Abstract:

In this paper, we develop an optimal Bayesian chart to control the expected number of defects per inspection unit in production processes with long production runs. We formulate this control problem in the optimal stopping framework. The objective is to determine the optimal stopping rule minimizing the long-run expected average cost per unit time considering partial information obtained from the process sampling at regular epochs. We prove the optimality of the control limit policy, i.e., the process is stopped and the search for assignable causes is initiated when the posterior probability that the process is out of control exceeds a control limit. An algorithm in the semi-Markov decision process framework is developed to calculate the optimal control limit and the corresponding average cost. Numerical examples are presented to illustrate the developed optimal control chart and to compare it with the traditional u-chart.
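
The control-limit policy can be illustrated with a short simulation, assuming Poisson-distributed defect counts, a geometric change point, and a fixed (hypothetical) control limit rather than the one obtained from the semi-Markov optimization in the paper.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)

lam_in, lam_out = 2.0, 5.0   # hypothetical defect rates per inspection unit
p_shift = 0.05               # per-epoch probability of going out of control
control_limit = 0.90         # hypothetical control limit on the posterior

# Simulate the process: it actually goes out of control at a geometric change point.
change_point = rng.geometric(p_shift)
posterior = 0.0              # P(out of control | data so far)
for t in range(1, 200):
    lam = lam_out if t >= change_point else lam_in
    y = rng.poisson(lam)

    # Predict: the process may have shifted since the last sample.
    prior_out = posterior + (1.0 - posterior) * p_shift
    # Update with the Poisson likelihood of the observed defect count.
    num = prior_out * poisson.pmf(y, lam_out)
    den = num + (1.0 - prior_out) * poisson.pmf(y, lam_in)
    posterior = num / den

    if posterior > control_limit:
        print(f"stop at epoch {t} (true change point {change_point}), "
              f"posterior = {posterior:.3f}")
        break
else:
    print("no alarm raised in 200 epochs")
```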

Keywords: Bayesian u-chart, economic design, optimal stopping, semi-Markov decision process, statistical process control

Procedia PDF Downloads 553
1278 Oil Reservoir Asphaltene Precipitation Estimation during CO2 Injection

Authors: I. Alhajri, G. Zahedi, R. Alazmi, A. Akbari

Abstract:

In this paper, an artificial neural network (ANN) was developed to predict asphaltene precipitation (AP) during the injection of carbon dioxide into crude oil reservoirs. In this study, experimental data from six different oil fields were collected. Seventy percent of the data was used to develop the ANN model, and different ANN architectures were examined. A network with the trainlm (Levenberg-Marquardt) training algorithm was found to be the best network to estimate the AP. To check the validity of the proposed model, the model was used to predict the AP for the remaining thirty percent of the data, which was not used in training. The mean square error (MSE) of the prediction was 0.0018, which confirms the excellent prediction capability of the proposed model. In the second part of this study, the ANN model predictions were compared with modified Hirschberg model predictions. The ANN was found to provide more accurate estimates compared to the modified Hirschberg model. Finally, the proposed model was employed to examine the effect of different operating parameters during gas injection on the AP. It was found that the AP is most sensitive to the reservoir temperature. Furthermore, increasing the carbon dioxide concentration in the liquid phase increases the AP.

Keywords: artificial neural network, asphaltene, CO2 injection, Hirschberg model, oil reservoirs

Procedia PDF Downloads 352
1277 Research on Knowledge Graph Inference Technology Based on Proximal Policy Optimization

Authors: Yihao Kuang, Bowen Ding

Abstract:

With the increasing scale and complexity of knowledge graphs, modern knowledge graphs contain more and more types of entity, relationship, and attribute information. Therefore, in recent years, it has become a trend in knowledge graph inference to use reinforcement learning to deal with large-scale, incomplete, and noisy knowledge graphs and to improve the inference effect and interpretability. The Proximal Policy Optimization (PPO) algorithm uses a proximal policy-update approach: it allows relatively large updates of the policy parameters while constraining the update extent to maintain training stability. This characteristic enables PPO to converge to improved strategies more rapidly, often demonstrating enhanced performance early in the training process. Furthermore, PPO can effectively reuse collected experience data for training, enhancing sample utilization. This means that even with limited resources, PPO can train efficiently for reinforcement learning tasks. Based on these characteristics, this paper aims to obtain a better and more efficient inference effect by introducing PPO into knowledge inference technology.
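
The clipped policy update that characterizes PPO can be written compactly; the sketch below evaluates the clipped surrogate objective on hypothetical old/new log-probabilities and advantage estimates, independent of any knowledge-graph environment.

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, clip_eps=0.2):
    """Clipped surrogate objective of PPO (to be maximized)."""
    ratio = np.exp(logp_new - logp_old)                 # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return np.mean(np.minimum(unclipped, clipped))

# Hypothetical batch: log-probabilities under the old and updated policies,
# and advantage estimates for the corresponding (state, action) pairs.
rng = np.random.default_rng(0)
logp_old = np.log(rng.uniform(0.05, 0.95, size=64))
logp_new = logp_old + rng.normal(0.0, 0.3, size=64)     # a moderately changed policy
advantages = rng.normal(0.0, 1.0, size=64)

print(f"clipped surrogate objective: "
      f"{ppo_clip_objective(logp_new, logp_old, advantages):.4f}")
```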

Keywords: reinforcement learning, PPO, knowledge inference

Procedia PDF Downloads 215
1276 Development of Computational Approach for Calculation of Hydrogen Solubility in Hydrocarbons for Treatment of Petroleum

Authors: Abdulrahman Sumayli, Saad M. AlShahrani

Abstract:

For the hydrogenation process, knowing the solubility of hydrogen (H2) in hydrocarbons is critical to improving the efficiency of the process. We investigated the computation of H2 solubility in four heavy crude oil feedstocks using machine learning techniques. Temperature, pressure, and feedstock type were considered as the inputs to the models, while the hydrogen solubility was the sole response. Specifically, we employed three different models: support vector regression (SVR), Gaussian process regression (GPR), and Bayesian ridge regression (BRR). To achieve the best performance, the hyper-parameters of these models were optimized using the whale optimization algorithm (WOA). We evaluated the models using a dataset of solubility measurements in various feedstocks, and we compared their performance based on several metrics. Our results show that the SVR model tuned with WOA achieves the best performance overall, with an RMSE of 1.38 × 10⁻² and an R-squared of 0.991. These findings suggest that machine learning techniques can provide accurate predictions of hydrogen solubility in different feedstocks, which could be useful in the development of hydrogen-related technologies. In addition, the solubility of hydrogen in the four heavy oil fractions is estimated over temperature and pressure ranges of 150 °C to 350 °C and 1.2 MPa to 10.8 MPa, respectively.
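
The SVR stage of the workflow can be sketched with scikit-learn; synthetic (temperature, pressure, feedstock) data stand in for the measured solubilities, and a randomized search is used here in place of the whale optimization algorithm for hyper-parameter tuning.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from scipy.stats import loguniform

# Synthetic stand-in: temperature (°C), pressure (MPa), feedstock id -> H2 solubility.
rng = np.random.default_rng(0)
n = 300
T = rng.uniform(150, 350, n)
P = rng.uniform(1.2, 10.8, n)
feed = rng.integers(0, 4, n)
X = np.column_stack([T, P, feed])
y = 0.02 * P + 0.0005 * T + 0.01 * feed + rng.normal(0, 0.01, n)   # toy relationship

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pipe = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
search = RandomizedSearchCV(
    pipe,
    param_distributions={"svr__C": loguniform(1e-1, 1e3),
                         "svr__gamma": loguniform(1e-3, 1e1),
                         "svr__epsilon": loguniform(1e-4, 1e-1)},
    n_iter=40, cv=5, random_state=0)
search.fit(X_tr, y_tr)
print("best parameters:", search.best_params_)
print("test R^2:", round(search.score(X_te, y_te), 3))
```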

Keywords: temperature, pressure variations, machine learning, oil treatment

Procedia PDF Downloads 50
1275 Aerosol - Cloud Interaction with Summer Precipitation over Major Cities in Eritrea

Authors: Samuel Abraham Berhane, Lingbing Bu

Abstract:

This paper presents the spatiotemporal variability of aerosols, clouds, and precipitation within the major cities in Eritrea and investigates the relationship between aerosols, clouds, and precipitation with respect to the presence of aerosols over the study region. In Eritrea, inadequate water supplies will have both direct and indirect adverse impacts on sustainable development in areas such as health, agriculture, energy, communication, and transport. In addition, there exists a gap in the knowledge on suitable and potential areas for cloud seeding. Further, the inadequate understanding of aerosol-cloud-precipitation (ACP) interactions limits the success of weather modification aimed at improving freshwater sources, storage, and recycling. The study of the spatiotemporal variability of aerosols, clouds, and precipitation involves spatial and time series analysis based on trend and anomaly analysis. To find the relationship between aerosols and clouds, a correlation coefficient is used. The spatiotemporal analysis showed larger variations of aerosols within the last two decades, especially in Assab, indicating that aerosol optical depth (AOD) has increased over the surrounding Red Sea region. Rainfall was significantly low, but AOD was significantly high, during the 2011 monsoon season. Precipitation was high during 2007 over most parts of Eritrea. The correlation coefficient between AOD and rainfall was negative over Asmara and Nakfa. Cloud effective radius (CER) and cloud optical thickness (COT) exhibited a negative correlation with AOD over Nakfa within the June-July-August (JJA) season. The hybrid single-particle Lagrangian integrated trajectory (HYSPLIT) model, used to find the path and origin of the air masses over the study region, showed that the majority of aerosols made their way to the study region via the westerly and southwesterly winds.

Keywords: aerosol-cloud-precipitation, aerosol optical depth, cloud effective radius, cloud optical thickness, HYSPLIT

Procedia PDF Downloads 114
1274 Solving Dimensionality Problem and Finding Statistical Constructs on Latent Regression Models: A Novel Methodology with Real Data Application

Authors: Sergio Paez Moncaleano, Alvaro Mauricio Montenegro

Abstract:

This paper presents a novel statistical methodology for measuring and finding constructs in latent regression analysis. This approach uses the qualities of factor analysis for binary data with interpretations in Item Response Theory (IRT). In addition, based on the fundamentals of submodel theory and on a convergence of many ideas from IRT, we propose an algorithm not just to solve the dimensionality problem (nowadays an open discussion) but to open a new line of research that promises fairer and more realistic qualifications for examinees and a revolution in IRT and educational research. In the end, the methodology is applied to a real data set, presenting impressive results in terms of coherence, speed, and precision. Acknowledgments: This research was financed by Colciencias through the project 'Multidimensional Item Response Theory Models for Practical Application in Large Test Designed to Measure Multiple Constructs', and both authors belong to the SICS Research Group at Universidad Nacional de Colombia.

Keywords: item response theory, dimensionality, submodel theory, factorial analysis

Procedia PDF Downloads 347
1273 Representativity Based Wasserstein Active Regression

Authors: Benjamin Bobbia, Matthias Picard

Abstract:

In recent years, active learning methodologies based on the representativity of the data have seemed more promising for limiting overfitting. The presented query methodology for regression uses the Wasserstein distance to measure the representativity of the labelled dataset compared to the global distribution. In this work, crucial use is made of GroupSort neural networks, which brings a double advantage: the Wasserstein distance can be exactly expressed in terms of such neural networks, and explicit bounds for their size and depth can be provided together with rates of convergence. Moreover, the heterogeneity of the dataset is also taken into account by weighting the Wasserstein distance with the approximation error at the previous step of active learning. Such an approach leads to a reduction of overfitting and high prediction performance after a few query steps. After detailing the methodology and the algorithm, an empirical study is presented in order to investigate the range of our hyperparameters. The performance of this method is compared, in terms of the number of queries needed, with other classical and recent query methods on several UCI datasets.
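
A toy version of the representativity-driven query loop: at each step, the unlabelled pool is scanned for the point whose addition most reduces the 1-D Wasserstein distance between the labelled subset and the whole pool. SciPy's closed-form 1-D distance stands in for the GroupSort-network estimator, and the error weighting is omitted.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
pool = rng.normal(0.0, 1.0, size=200)          # unlabelled pool (1-D features for simplicity)
labelled_idx = list(rng.choice(len(pool), size=5, replace=False))

def representativity_gap(indices):
    """Wasserstein distance between the labelled subset and the whole pool."""
    return wasserstein_distance(pool[indices], pool)

for step in range(10):
    candidates = [i for i in range(len(pool)) if i not in labelled_idx]
    # Greedily pick the candidate whose labelling makes the subset most representative.
    best = min(candidates, key=lambda i: representativity_gap(labelled_idx + [i]))
    labelled_idx.append(best)
    print(f"step {step + 1}: queried index {best}, "
          f"gap = {representativity_gap(labelled_idx):.4f}")
```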

Keywords: active learning, Lipschitz regularization, neural networks, optimal transport, regression

Procedia PDF Downloads 67
1272 Environmental Assessment of Roll-to-Roll Printed Smart Label

Authors: M. Torres, A. Moulay, M. Zhuldybina, M. Rozel, N. D. Trinh, C. Bois

Abstract:

Printed electronics are a fast-growing market, as their applications cover a large range of industrial needs, their production cost is low, and the additive printing techniques consume less material than the subtractive manufacturing methods used in traditional electronics. With the growing demand for printed electronics, there are concerns about their harmful and irreversible contribution to the environment. Indeed, it is estimated that 80% of the environmental load of a product is determined by the choices made at the conception stage. Therefore, examination through a life cycle approach at the development stage of a novel product is the best way to identify potential environmental issues and make proactive decisions. Life cycle analysis (LCA) is a comprehensive scientific method to assess the environmental impacts of a product in the different stages of its life: extraction of raw materials, manufacture and distribution, use, and end-of-life. Impacts and major hotspots are identified and evaluated through a broad range of environmental impact categories of the ReCiPe (H) midpoint method. At the conception stage, LCA is a tool that provides an environmental point of view on the choice of materials and processes and weighs in on the balance between performance materials and eco-friendly materials. Using the life cycle approach, the current work aims to provide a cradle-to-grave life cycle assessment of a roll-to-roll hybrid printed smart label designed for the food cold chain. Furthermore, this work will present the environmental impact of metallic conductive inks, a comparison with promising conductive polymers, an evaluation of energy versus performance of industrial printing processes, a full assessment of the impact of the smart label applied on a cellulosic-based substrate during the recycling process, and the possible recovery of precious metals and rare earth elements.

Keywords: eco-design, label, life cycle assessment, printed electronics

Procedia PDF Downloads 142
1271 Double Encrypted Data Communication Using Cryptography and Steganography

Authors: Adine Barett, Jermel Watson, Anteneh Girma, Kacem Thabet

Abstract:

In information security, secure communication of data across networks has always been a problem at the forefront. Transfer of information across networks is susceptible to being exploited by attackers engaging in malicious activity. In this paper, we leverage steganography and cryptography to create a layered security solution to protect the information being transmitted. The first layer of security leverages cryptographic techniques to scramble the information so that it cannot be deciphered even if the steganography-based layer is compromised. The second layer of security relies on steganography to disguise the encrypted information so that it cannot be seen. We consider three cryptographic cipher methods in the cryptography layer, namely the Playfair cipher, the Blowfish cipher, and the Hill cipher. Then, the encrypted message is passed to the least significant bit (LSB) steganography algorithm for further concealment. Both approaches are combined efficiently to help secure information in transit over a network. This multi-layered encryption is a solution that will benefit cloud platforms, social media platforms, and networks that regularly transfer private information, such as banks and insurance companies.
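
A minimal sketch of the second (steganographic) layer: a byte string, assumed to have already passed through the cryptographic layer, is hidden in the least significant bits of a cover image. A toy XOR scrambler stands in for the Playfair/Blowfish/Hill ciphers discussed above.

```python
import numpy as np

def xor_scramble(message: bytes, key: bytes) -> bytes:
    """Toy stand-in for the cryptographic layer (not one of the ciphers in the paper)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(message))

def lsb_embed(cover: np.ndarray, payload: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = cover.flatten()
    if bits.size > flat.size:
        raise ValueError("payload too large for cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite least significant bits
    return flat.reshape(cover.shape)

def lsb_extract(stego: np.ndarray, n_bytes: int) -> bytes:
    bits = stego.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.default_rng(0).integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
secret = xor_scramble(b"transfer details", key=b"k3y")
stego = lsb_embed(cover.copy(), secret)
recovered = xor_scramble(lsb_extract(stego, len(secret)), key=b"k3y")
print(recovered)   # b'transfer details'
```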

Keywords: cryptography, steganography, layered security, cipher, encryption

Procedia PDF Downloads 62
1270 Effects of Data Correlation in a Sparse-View Compressive Sensing Based Image Reconstruction

Authors: Sajid Abas, Jon Pyo Hong, Jung-Ryun Le, Seungryong Cho

Abstract:

Computed tomography and laminography are heavily investigated in a compressive sensing based image reconstruction framework to reduce the dose to the patients as well as to radiosensitive devices such as multilayer microelectronic circuit boards. Nowadays, researchers are actively working on optimizing compressive sensing based iterative image reconstruction algorithms to obtain better quality images. However, the effects of the sampled data's properties on the quality of the reconstructed image, particularly under insufficiently sampled data conditions, have not been explored in computed laminography. In this paper, we investigated the effects of two data properties, i.e., sampling density and data incoherence, on the image reconstructed by conventional computed laminography and by a recently proposed method called the spherical sinusoidal scanning scheme. We found that in a compressive sensing based image reconstruction framework, the image quality mainly depends upon the data incoherence when the data is uniformly sampled.

Keywords: computed tomography, computed laminography, compressive sensing, low-dose

Procedia PDF Downloads 448
1269 Component Based Testing Using Clustering and Support Vector Machine

Authors: Iqbaldeep Kaur, Amarjeet Kaur

Abstract:

Software reusability is an important part of software development. Component-based software development, particularly in the context of software testing, has therefore gained considerable practical importance in software engineering, both from the academic research perspective and from the software development industry perspective. Finding test cases for efficient reuse is one of the important problems addressed by researchers. Clustering reduces the search space and enables the reuse of test cases by grouping similar entities according to requirements, which reduces the time needed to retrieve test cases. In this paper, we propose an unsupervised approach for the reusability of test cases. In the unsupervised learning stage, we use k-means clustering together with a Support Vector Machine. We have designed an algorithm for clustering requirement and test case documents according to their tf-idf vector space, and the output is a set of highly cohesive pattern groups.
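
A small sketch of the clustering stage with scikit-learn: placeholder requirement/test-case texts are embedded in a tf-idf vector space and grouped with k-means, and a new requirement is routed to its cluster to shortlist reusable test cases. The SVM stage is omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder requirement / test-case documents.
documents = [
    "verify login with valid username and password",
    "verify login fails with invalid password",
    "test report generation in pdf format",
    "test report export to csv file",
    "check password reset email is sent",
    "validate pdf report page numbering",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(documents)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Reuse step: route a new requirement to its cluster and rank members by similarity.
new_req = "login should fail for wrong credentials"
q = vectorizer.transform([new_req])
cluster = kmeans.predict(q)[0]
members = [i for i, c in enumerate(kmeans.labels_) if c == cluster]
ranked = sorted(members, key=lambda i: -cosine_similarity(q, X[i])[0, 0])
print("candidate test cases for reuse:", [documents[i] for i in ranked])
```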

Keywords: software testing, reusability, clustering, k-means, SVM

Procedia PDF Downloads 407
1268 Investigation of Doping of CdSe QDs in Organic Semiconductor for Solar Cell Applications

Authors: Ganesh R. Bhand, N. B. Chaure

Abstract:

Cadmium selenide (CdSe) quantum dots (QDs) were prepared by a solvothermal route. Subsequently, inorganic QD-organic semiconductor (copper phthalocyanine) nanocomposites (i.e., CuPc:CdSe nanocomposites) were produced with different concentrations of QDs in CuPc. The nanocomposite thin films were prepared by means of the spin coating technique. The optical, structural, and morphological properties of the nanocomposite films were investigated. Transmission electron microscopy (TEM) confirmed the formation of QDs with an average size of 4 nm. The X-ray diffraction pattern exhibits the cubic crystal structure of CdSe, with reflections from the (111), (220), and (311) planes at 25.4°, 42.2°, and 49.6°, respectively. The additional peak observed at a lower angle of 6.9° in the nanocomposite thin films is associated with CuPc. Field emission scanning electron microscopy (FESEM) showed that the surface morphology varied with increasing concentration of CdSe QDs. The obtained nanocomposites show a significant improvement in thermal stability compared to pure CuPc, as indicated by thermogravimetric analysis (TGA). The Raman spectra of the composite samples confirm the homogeneous dispersion of CdSe in the CuPc matrix and the strong interaction between the two components, which promotes charge transfer. The interaction between the components of the composite was further confirmed by Fourier transform infrared spectroscopy (FTIR). The photophysical properties were studied using UV-visible spectroscopy. Enhanced optical absorption in the visible region was observed for the nanocomposite layer with increasing concentration of CdSe in CuPc. This composite may maximize the interface between the QDs and the organic semiconductor for efficient charge separation and enhanced charge transport. Such nanocomposite films have potential application in the fabrication of hybrid solar cells with improved power conversion efficiency.

Keywords: CdSe QDs, copper phthalocyanine, FTIR, optical absorption

Procedia PDF Downloads 179
1267 Constructing White-Box Implementations Based on Threshold Shares and Composite Fields

Authors: Tingting Lin, Manfred von Willich, Dafu Lou, Phil Eisen

Abstract:

A white-box implementation of a cryptographic algorithm is a software implementation intended to resist extraction of the secret key by an adversary. To date, most white-box techniques are used to protect block cipher implementations. However, a large proportion of white-box implementations have been proven vulnerable to affine equivalence attacks and other algebraic attacks, as well as to differential computation analysis (DCA). In this paper, we identify a class of block ciphers for which we propose a method of constructing white-box implementations. Our method is based on threshold implementations and operations in composite fields. The resulting implementations consist of lookup tables and a few exclusive-OR operations. All intermediate values (inputs and outputs of the lookup tables) are masked. The threshold implementation makes the distribution of the masked values uniform and independent of the original inputs, and the operations in composite fields reduce the size of the lookup tables. The white-box implementations can provide resistance against algebraic attacks and DCA-like attacks.
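
The sharing idea behind threshold implementations can be shown in miniature: a byte is split into three XOR shares, two of which are fresh random masks, so any single share is uniformly distributed and independent of the secret value. This covers only the masking step, not the composite-field lookup-table construction proposed in the paper.

```python
import secrets

def split_into_shares(value: int, n_shares: int = 3) -> list:
    """Split a byte into n XOR shares; any n-1 of them are uniformly random."""
    masks = [secrets.randbelow(256) for _ in range(n_shares - 1)]
    last = value
    for m in masks:
        last ^= m                      # the final share absorbs all masks
    return masks + [last]

def recombine(shares: list) -> int:
    out = 0
    for s in shares:
        out ^= s
    return out

secret_byte = 0x3A
shares = split_into_shares(secret_byte)
print("shares:", [hex(s) for s in shares])
print("recombined:", hex(recombine(shares)))   # 0x3a
```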

Keywords: white-box, block cipher, composite field, threshold implementation

Procedia PDF Downloads 152
1266 Profit-Based Artificial Neural Network (ANN) Trained by Migrating Birds Optimization: A Case Study in Credit Card Fraud Detection

Authors: Ashkan Zakaryazad, Ekrem Duman

Abstract:

A typical classification technique ranks the instances in a data set according to the likelihood of belonging to one (positive) class. A credit card (CC) fraud detection model ranks the transactions in terms of the probability of being fraudulent. In fact, this approach is often criticized, because firms do not care about the fraud probability but about the profitability or costliness of detecting a fraudulent transaction. The key contribution of this study is to focus on profit maximization in the model building step. The artificial neural network proposed in this study works based on profit maximization instead of minimizing the prediction error. Moreover, some studies have shown that the backpropagation algorithm, like other gradient-based algorithms, usually gets trapped in local optima, and that swarm-based algorithms are more successful in this respect. In this study, we train our profit-maximization ANN using the Migrating Birds Optimization (MBO) algorithm, which was recently introduced to the literature.

Keywords: neural network, profit-based neural network, sum of squared errors (SSE), MBO, gradient descent

Procedia PDF Downloads 451
1265 Degraded Document Analysis and Extraction of Original Text Document: An Approach without Optical Character Recognition

Authors: L. Hamsaveni, Navya Prakash, Suresha

Abstract:

Document image analysis recognizes text and graphics in documents acquired as images. An approach without Optical Character Recognition (OCR) for degraded document image analysis has been adopted in this paper. The technique involves document imaging methods such as image fusing and Speeded Up Robust Features (SURF) detection to identify and extract the degraded regions from a set of document images and to obtain an original document with complete information. If the captured degraded document image is skewed, it has to be straightened (deskewed) before further processing. A special image storage format known as YCbCr is used as a tool in converting the grayscale image to the RGB image format. The presented algorithm is tested on various types of degraded documents, such as printed documents, handwritten documents, old script documents, and handwritten image sketches in documents. The purpose of this research is to obtain an original document for a given set of degraded documents from the same source.

Keywords: grayscale image format, image fusing, RGB image format, SURF detection, YCbCr image format

Procedia PDF Downloads 361
1264 High Secure Data Hiding Using Cropping Image and Least Significant Bit Steganography

Authors: Khalid A. Al-Afandy, El-Sayyed El-Rabaie, Osama Salah, Ahmed El-Mhalaway

Abstract:

This paper presents a highly secure data hiding technique using image cropping and Least Significant Bit (LSB) steganography. Crops at predefined secret coordinates are extracted from the cover image. The secret text message is divided into sections, and the number of sections equals the number of image crops. Each section of the secret text message is embedded into an image crop in a secret sequence using the LSB technique. The embedding is done using the cover image color channels. The stego image is obtained by reassembling the image from the stego crops. The results of the technique are compared to other state-of-the-art techniques. Evaluation is based on visual inspection to detect any degradation of the stego image, the difficulty of extracting the embedded data by an unauthorized viewer, the peak signal-to-noise ratio (PSNR) of the stego image, and the CPU time of the embedding algorithm. Experimental results show that the proposed technique is more secure compared with the other traditional techniques.
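
The PSNR figure used in the evaluation can be computed as below; the cover/stego pair is synthetic (every least significant bit flipped), purely to show the metric.

```python
import numpy as np

def psnr(cover: np.ndarray, stego: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between cover and stego images, in dB."""
    mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)
stego = cover ^ 1          # toy distortion: flip every least significant bit
print(f"PSNR = {psnr(cover, stego):.2f} dB")   # ~48 dB, typical of full LSB embedding
```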

Keywords: steganography, stego, LSB, crop

Procedia PDF Downloads 252